Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Chapter 3 Solaris Live Upgrade (Planning)

This chapter provides guidelines and requirements for review before installing and using Solaris Live Upgrade. You should also review general information about upgrading in Upgrade Planning in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For planning information for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 12, Solaris Live Upgrade for ZFS (Planning).


This chapter contains the following sections:

  • Solaris Live Upgrade Requirements

  • Upgrading a System With Packages or Patches

  • Guidelines for Creating File Systems With the lucreate Command

  • Guidelines for Selecting Slices for File Systems

  • Customizing a New Boot Environment's Content

  • Synchronizing Files Between Boot Environments

  • Booting Multiple Boot Environments

  • Solaris Live Upgrade Character User Interface

Solaris Live Upgrade Requirements

Before you install and use Solaris Live Upgrade, become familiar with these requirements.

Solaris Live Upgrade System Requirements

Solaris Live Upgrade is included in the Solaris software. You need to install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 5/09 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 5/09 release.

Table 3–1 lists releases that are supported by Solaris Live Upgrade.

Table 3–1 Supported Solaris Releases

Your Current Release     Compatible Upgrade Release

Solaris 8 OS             Solaris 8, 9, or any Solaris 10 release

Solaris 9 OS             Solaris 9 or any Solaris 10 release

Solaris 10 OS            Any Solaris 10 release

Installing Solaris Live Upgrade

You can install the Solaris Live Upgrade packages by using either the pkgadd command or the Solaris Live Upgrade installer.

Be aware that the following patches might need to be installed for the correct operation of Solaris Live Upgrade.

Description 

For More Information 

Caution: Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches.


x86 only –

If this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. Even if you do not see this error message, the necessary patches still might not be installed. Always verify that all patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade.


ERROR: Cannot find or is not executable: 
</sbin/biosdev>.
ERROR: One or more patches required 
by Live Upgrade has not been installed.

The patches listed in info doc 206844 (formerly 72099) are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, make sure that you have the latest Solaris Live Upgrade patches installed.

Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

If you are running the Solaris 8 or 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the recommended patch cluster for the Java 2 runtime environment to run the Solaris Live Upgrade installer and install the packages.

To install the Solaris Live Upgrade packages, use the pkgadd command. Or, install the recommended patch cluster for the Java 2 runtime environment. The patch cluster is available at http://sunsolve.sun.com.

For instructions about installing the Solaris Live Upgrade software, see Installing Solaris Live Upgrade.

Required Packages

If you have problems with Solaris Live Upgrade, you might be missing packages. In the following table, check that your OS has the listed packages, which are required to use Solaris Live Upgrade.

For information about the software groups in the Solaris 10 release, see Disk Space Recommendations for Software Groups in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Table 3–2 Required Packages for Solaris Live Upgrade

Solaris 8 Release     Solaris 9 Release     Solaris 10 Release

SUNWadmap             SUNWadmap             SUNWadmap

SUNWadmc              SUNWadmc              SUNWadmlib-sysid

SUNWlibC              SUNWadmfw             SUNWadmr

SUNWbzip              SUNWlibC              SUNWlibC

SUNWgzip              SUNWgzip              SUNWgzip (Solaris 10 3/05 only)

SUNWj2rt              SUNWj2rt              SUNWj5rt


Note –

The SUNWj2rt package (Solaris 8 and 9 releases) and the SUNWj5rt package (Solaris 10 release) are needed only under the following conditions:

  • When you run the Solaris Live Upgrade installer to add Solaris Live Upgrade packages

  • When you upgrade and use CD media

To check for packages on your system, type the following command.


% pkginfo package_name
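
For example, to confirm that several of the Solaris 10 packages listed in Table 3–2 are installed, you might type:


% pkginfo SUNWadmap SUNWadmlib-sysid SUNWadmr SUNWlibC SUNWj5rt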

Solaris Live Upgrade Disk Space Requirements

Follow general disk space requirements for an upgrade. See Chapter 4, System Requirements, Guidelines, and Upgrade (Planning), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

To estimate the file system size that is needed to create a boot environment, start the creation of a new boot environment. The required size is calculated, and you can then abort the process.
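
For example, the following sketch starts a boot environment creation so that lucreate reports the space that is needed; you can interrupt the command once the size has been reported. The boot environment name and slice are placeholders.


# lucreate -n sizecheck_be -m /:/dev/dsk/c0t1d0s0:ufs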

The disk on the new boot environment must be able to serve as a boot device. Some systems restrict which disks can serve as a boot device. Refer to your system's documentation to determine if any boot restrictions apply.

The disk might need to be prepared before you create the new boot environment. Check that the disk is formatted properly.

Solaris Live Upgrade Requirements if Creating RAID-1 Volumes (Mirrors)

Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems that are RAID-1 volumes (mirrors). Solaris Live Upgrade does not implement the full functionality of Solaris Volume Manager, but does require the following components of Solaris Volume Manager.

Table 3–3 Required Components for Solaris Live Upgrade and RAID-1 Volumes

Requirement  

Description 

For More Information 

You must create at least one state database and at least three state database replicas.  

A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. The replicated copies protect against data loss from single points of failure.

For information about creating a state database, see Chapter 6, State Database (Overview), in Solaris Volume Manager Administration Guide.

Solaris Live Upgrade supports only a RAID-1 volume (mirror) with single-slice concatenations on the root (/) file system.

A concatenation is a RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a RAID-1 volume.

A RAID-1 volume can consist of a maximum of three concatenations.

For guidelines about creating mirrored file systems, see Guidelines for Selecting Slices for Mirrored File Systems.
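
As a hedged illustration of the state database requirement, the following Solaris Volume Manager command creates a state database with three replicas on a dedicated slice. The slice name c0t0d0s7 is a placeholder.


# metadb -a -f -c 3 c0t0d0s7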

Upgrading a System With Packages or Patches

You can use Solaris Live Upgrade to add patches and packages to a system. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command. When you use the luupgrade command, you can also use a Solaris Flash archive to install patches or packages.


Caution –

When upgrading and adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail during an upgrade, or can alter the active boot environment.

For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).


Type of Installation 

Description 

For More Information 

Adding patches to a boot environment  

Create a new boot environment and use the luupgrade command with the -t option.

To Add Patches to a Network Installation Image on a Boot Environment

Adding packages to a boot environment 

Use the luupgrade command with the -p option.

To Add Packages to a Network Installation Image on a Boot Environment

Using Solaris Live Upgrade to install a Solaris Flash archive 

An archive contains a complete copy of a boot environment with new packages and patches already included. This copy can be installed on multiple systems. 
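
As hedged examples of the -t and -p options described in the preceding table, the following commands might be used. The boot environment name, directory paths, patch ID, and package name are placeholders.


# luupgrade -t -n second_disk -s /var/tmp/patches 123456-01
# luupgrade -p -n second_disk -s /var/spool/pkg SUNWpkgname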

Upgrading and Patching Limitations

For upgrading and patching limitations, see Upgrading and Patching Limitations in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Guidelines for Creating File Systems With the lucreate Command

The lucreate -m option specifies which file systems, and how many file systems, are to be created in the new boot environment. You must specify the exact number of file systems that you want to create by repeating this option. When using the -m option to create file systems, follow the guidelines in the sections that follow. A minimal sketch of the syntax is shown below.
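
The sketch assumes hypothetical slice names and a boot environment named second_disk; it creates a root (/) file system and a dedicated swap slice. A mount point of "-" indicates swap.


# lucreate -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs -m -:/dev/dsk/c0t1d0s1:swap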

Guidelines for Selecting Slices for File Systems

When you create file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from creating invalid configurations for critical file systems. For example, you could type a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.

Do not overlap slices when reslicing disks. If this condition exists, the new boot environment appears to have been created, but when activated, the boot environment does not boot. The overlapping file systems might be corrupted.

For Solaris Live Upgrade to work properly, the vfstab file on the active boot environment must have valid contents and must have, at a minimum, an entry for the root (/) file system.

Guidelines for Selecting a Slice for the root (/) File System

When you create an inactive boot environment, you need to identify a slice where the root (/) file system is to be copied. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following:

Guidelines for Selecting Slices for Mirrored File Systems

You can create a new boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. Critical file systems that are copied to the new boot environment can be any of these types.

When you create a new boot environment, the lucreate command with the -m option recognizes the following three types of devices:

  • A physical slice in the form of /dev/dsk/cwtxdysz

  • A Solaris Volume Manager volume in the form of /dev/md/dsk/dnum

  • A Veritas Volume Manager volume in the form of /dev/vx/dsk/volume_name
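
As a hedged sketch that uses these device forms, the following command creates a boot environment whose root (/) file system is the RAID-1 volume d10, with the placeholder slice c0t1d0s0 attached as a single-slice concatenation:


# lucreate -n mirror_be -m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:attach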


Note –

If you have problems upgrading with Veritas VxVM, see System Panics When Upgrading With Solaris Live Upgrade Running Veritas VxVm.


General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems

Use the following guidelines to check whether a RAID-1 volume is busy or resynchronizing, and whether a volume contains file systems that are in use by a Solaris Live Upgrade boot environment.

For volume naming guidelines, see RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Checking Status of Volumes

If a mirror or submirror needs maintenance or is busy, components cannot be detached. You should use the metastat command before creating a new boot environment and using the detach keyword. The metastat command checks if the mirror is in the process of resynchronization or if the mirror is in use. For information, see the man page metastat(1M).
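
For example, to check the state of the placeholder mirror d10 before detaching a submirror, you might type:


# metastat d10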

Detaching Volumes and Resynchronizing Mirrors

If you use the detach keyword to detach a submirror, lucreate checks if a device is currently resyncing. If the device is resyncing, you cannot detach the submirror and you see an error message.

Resynchronization is the process of copying data from one submirror to another submirror after the following problems:

  • Submirror failures

  • System crashes

  • A submirror has been taken offline and brought back online

  • The addition of a new submirror

For more information about resynchronization, see RAID-1 Volume (Mirror) Resynchronization in Solaris Volume Manager Administration Guide.

Using Solaris Volume Manager Commands

Use the lucreate command rather than Solaris Volume Manager commands to manipulate volumes on inactive boot environments. The Solaris Volume Manager software has no knowledge of boot environments, whereas the lucreate command contains checks that prevent you from inadvertently destroying a boot environment. For example, lucreate prevents you from overwriting or deleting a Solaris Volume Manager volume.

However, if you have already used Solaris Volume Manager software to create complex Solaris Volume Manager concatenations, stripes, and mirrors, you must use Solaris Volume Manager software to manipulate them. Solaris Live Upgrade is aware of these components and supports their use. Before using Solaris Volume Manager commands that can create, modify, or destroy volume components, use the lustatus or lufslist commands. These commands can determine which Solaris Volume Manager volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.
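
For example, before running Solaris Volume Manager commands, you might list the file systems of a hypothetical boot environment named second_disk to see which volumes it uses:


# lufslist second_disk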

Guidelines for Selecting a Slice for a Swap Volume

These guidelines contain configuration recommendations and examples for a swap slice.

Configuring Swap for the New Boot Environment

You can configure a swap slice in three ways by using the lucreate command with the -m option.

The following examples show the three ways of configuring swap. The current boot environment is configured with the root (/) file system on c0t0d0s0. The swap file system is on c0t0d0s1.
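
The following hedged sketches illustrate the three approaches; the new boot environment's name (be2) and slice names are placeholders.

Reuse the current boot environment's swap slice by omitting a swap entry:


# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs

Place swap on a new slice (a mount point of "-" indicates swap):


# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs -m -:/dev/dsk/c0t4d0s1:swap

Split swap across two slices:


# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs -m -:/dev/dsk/c0t4d0s1:swap \
-m -:/dev/dsk/c1t0d0s1:swap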

Failed Boot Environment Creation if Swap is in Use

A boot environment creation fails if the swap slice is being used by any boot environment except for the current boot environment. If the boot environment was created using the -s option, the alternate-source boot environment can use the swap slice, but not any other boot environment.

Guidelines for Selecting Slices for Shareable File Systems

Solaris Live Upgrade copies the entire contents of a slice to the designated new boot environment slice. You might want some large file systems on that slice to be shared between boot environments rather than copied, to conserve space and copying time. File systems that are critical to the OS, such as root (/) and /var, must be copied. File systems such as /home are not critical file systems and can be shared between boot environments. Shareable file systems must be user-defined file systems and must be on separate disk slices on both the active and new boot environments. You can reconfigure the disk several ways, depending on your needs.

Reconfiguring a disk 

Examples 

For More Information 

You can reslice the disk before creating the new boot environment and put the shareable file system on its own slice.  

For example, if the root (/) file system, /var, and /home are on the same slice, reconfigure the disk and put /home on its own slice. When you create any new boot environments, /home is shared with the new boot environment by default.

format(1M)

If you want to share a directory, the directory must be split off to its own slice. The directory is then a file system that can be shared with another boot environment. You can use the lucreate command with the -m option to create a new boot environment and split a directory off to its own slice. However, the new file system cannot yet be shared with the original boot environment. You need to run the lucreate command with the -m option again to create another boot environment. The two new boot environments can then share the directory.

For example, if you wanted to upgrade from the Solaris 9 release to the Solaris 10 5/09 release and share /home, you could run the lucreate command with the -m option to create a Solaris 9 boot environment with /home as a separate file system on its own slice. Then run the lucreate command with the -m option again to duplicate that boot environment. This third boot environment can then be upgraded to the Solaris 10 5/09 release. /home is then shared between the Solaris 9 and Solaris 10 5/09 releases.

For a description of shareable and critical file systems, see File System Types.
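
As a hedged sketch of splitting /home off to its own slice while creating a new boot environment (the boot environment name and slice names are placeholders):


# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs -m /home:/dev/dsk/c0t4d0s7:ufs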

Customizing a New Boot Environment's Content

When you create a new boot environment, some directories and files can be excluded from a copy to the new boot environment. If you have excluded a directory, you can also reinstate specified subdirectories or files under the excluded directory. These subdirectories or files that have been restored are then copied to the new boot environment. For example, you could exclude from the copy all files and directories in /etc/mail, but include all files and directories in /etc/mail/staff. The following command copies the staff subdirectory to the new boot environment.


# lucreate -n second_disk -x /etc/mail -y /etc/mail/staff

Caution –

Use the file-exclusion options with caution. Do not remove files or directories that are required by the system.


The following table lists the lucreate command options for removing and restoring directories and files.

How Specified?                               Exclude Options     Include Options

Specify the name of the directory or file    -x exclude_dir      -y include_dir

Use a file that contains a list              -f list_filename    -Y list_filename
                                             -z list_filename    -z list_filename
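
As a hedged sketch of the -z option, entries in the list file are marked for inclusion with a plus (+) or for exclusion with a minus (-); the file name and contents here are hypothetical.


# cat /var/tmp/list_file
- /etc/mail
+ /etc/mail/staff
# lucreate -n second_disk -z /var/tmp/list_file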

For examples of customizing the directories and files when creating a boot environment, see To Create a Boot Environment and Customize the Content.

Synchronizing Files Between Boot Environments

When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain critical system files and directories might be copied from the last-active boot environment to the boot environment being booted. Only those files and directories that have changed are copied.

Adding Files to the /etc/lu/synclist

Solaris Live Upgrade checks for critical files that have changed. If these files' content is not the same in both boot environments, they are copied from the active boot environment to the new boot environment. Synchronizing is meant for critical files such as /etc/passwd or /etc/group files that might have changed since the new boot environment was created.

The /etc/lu/synclist file contains a list of directories and files that are synchronized. In some instances, you might want to copy other files from the active boot environment to the new boot environment. You can add directories and files to /etc/lu/synclist if necessary.

Adding files that are not listed in /etc/lu/synclist could cause a system to become unbootable. The synchronization process only copies files and creates directories; it does not remove files or directories.

The following example of the /etc/lu/synclist file shows the standard directories and files that are synchronized for this system.


/var/mail                    OVERWRITE
/var/spool/mqueue            OVERWRITE
/var/spool/cron/crontabs     OVERWRITE
/var/dhcp                    OVERWRITE
/etc/passwd                  OVERWRITE
/etc/shadow                  OVERWRITE
/etc/opasswd                 OVERWRITE
/etc/oshadow                 OVERWRITE
/etc/group                   OVERWRITE
/etc/pwhist                  OVERWRITE
/etc/default/passwd          OVERWRITE
/etc/dfs                     OVERWRITE
/var/log/syslog              APPEND
/var/adm/messages            APPEND

Examples of directories and files that might be appropriate to add to the synclist file are the following:


/var/yp                    OVERWRITE
/etc/mail                  OVERWRITE
/etc/resolv.conf           OVERWRITE
/etc/domainname            OVERWRITE

The synclist file entries can be files or directories. The second field is the method of updating that is applied when the boot environment is activated. You can choose from the following three methods to update files:

  • OVERWRITE – The contents of the active boot environment's file overwrite the contents of the new boot environment's file. OVERWRITE is the default action if no action is specified in the second field.

  • APPEND – The contents of the active boot environment's file are added to the end of the file on the new boot environment. This action might lead to duplicate entries in the file.

  • PREPEND – The contents of the active boot environment's file are added to the beginning of the file on the new boot environment. This action might lead to duplicate entries in the file.

Forcing a Synchronization Between Boot Environments

The first time you boot from a newly created boot environment, Solaris Live Upgrade synchronizes the new boot environment with the boot environment that was last active. After this initial boot and synchronization, Solaris Live Upgrade does not perform a synchronization unless requested. To force a synchronization, you use the luactivate command with the -s option.
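
For example, to force a synchronization when activating a hypothetical boot environment named second_disk:


# luactivate -s second_disk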

You might want to force a synchronization if you are maintaining multiple versions of the Solaris OS. You might want changes in files such as email or passwd/group to be present in the boot environment you are activating. If you force a synchronization, Solaris Live Upgrade checks for conflicts between files that are subject to synchronization. When the new boot environment is booted and a conflict is detected, a warning is issued and the files are not synchronized. Activation can be completed successfully despite such a conflict. A conflict can occur if you make changes to the same file on both the new boot environment and the active boot environment. For example, you make changes to the /etc/passwd file on the original boot environment. Then you make other changes to the /etc/passwd file on the new boot environment. The synchronization process cannot choose which file to copy for the synchronization.


Caution –

Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 5/09 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 5/09 files might not be compatible with the Solaris 9 files.


Booting Multiple Boot Environments

If more than one OS is installed on your system, you can boot from these boot environments on both SPARC and x86 platforms. The boot environments available for booting include Solaris Live Upgrade inactive boot environments.

On both SPARC and x86 based systems, each ZFS root pool has a dataset designated as the default root file system. On SPARC, if you type the boot command, or, on x86, if you accept the default entry from the GRUB menu, this default root file system is booted.


Note –

If the GRUB menu has been explicitly modified to designate a default menu item other than the one set by Solaris Live Upgrade, then selecting that default menu entry might not result in the booting of the pool's default root file system.


For more information about booting and modifying the GRUB boot menu, see the following references.

Task 

Information 

To activate a boot environment with the GRUB menu 

x86: To Activate a Boot Environment With the GRUB Menu

To fall back to the original boot environment with a GRUB menu 

x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu

For SPARC and x86 information and step-by-step procedures for booting and modifying boot behavior 

System Administration Guide: Basic Administration

For an overview and step-by-step procedures for booting ZFS boot environments 

Booting From a ZFS Root File System in Solaris ZFS Administration Guide

Solaris Live Upgrade Character User Interface

Sun no longer recommends use of the lu command. The lu command displays a character user interface (CUI). The underlying command sequence for the CUI, typically the lucreate, luupgrade, and luactivate commands, is straightforward to use. Procedures for these commands are provided in the following chapters.