This chapter provides guidelines and requirements for review before installing and using Solaris Live Upgrade. You should also review general information about upgrading in Upgrade Planning in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade. This chapter contains the following sections:
Before you install and use Solaris Live Upgrade, become familiar with these requirements.
Solaris Live Upgrade is included in the Solaris software. You need to install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 11/06 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 11/06 release.
Table 3–1 lists releases that are supported by Solaris Live Upgrade.
Table 3–1 Supported Solaris Releases
| Your Current Release | Compatible Upgrade Release |
|---|---|
| Solaris 8 OS | Solaris 8, 9, or any Solaris 10 release |
| Solaris 9 OS | Solaris 9 or any Solaris 10 release |
| Solaris 10 OS | Any Solaris 10 release |
You can install the Solaris Live Upgrade packages by using the following:
The pkgadd command. The Solaris Live Upgrade packages are SUNWlur and SUNWluu, and these packages must be installed in that order.
An installer on the Solaris Operating System DVD, the Solaris Software - 2 CD, or a net installation image.
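For example, a minimal sketch that installs both packages in the required order with the pkgadd command, assuming the packages are in the Solaris_10/Product directory of a mounted installation image (the path is illustrative):

# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlur SUNWluu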
Be aware that the following patches might need to be installed for the correct operation of Solaris Live Upgrade.
| Description | For More Information |
|---|---|
| Caution: Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before you install or run Solaris Live Upgrade, you must install these patches. x86 only – If this set of patches is not installed, Solaris Live Upgrade fails and you might see an error message. Even if you do not see an error message, the necessary patches still might not be installed. Always verify that all patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade. The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, check that you have the latest Solaris Live Upgrade patches installed. | Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for info doc 72099 on the SunSolve web site. |
| If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the recommended patch cluster for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. | To install the Solaris Live Upgrade packages, use the pkgadd command, or install the recommended patch cluster for the Java 2 runtime environment. The patch cluster is available on http://sunsolve.sun.com. |
For instructions about installing the Solaris Live Upgrade software, see Installing Solaris Live Upgrade.
If you have problems with Solaris Live Upgrade, you might be missing packages. In the following table, check that your OS has the listed packages, which are required to use Solaris Live Upgrade.
For the Solaris 10 release:
If you install one of the following software groups, your system has all the required Solaris Live Upgrade packages:
Entire Solaris Software Group Plus OEM Support
Entire Solaris Software Group
Developer Solaris Software Group
End User Solaris Software Group
If you install one of the following software groups, you might not have all the packages required to use Solaris Live Upgrade:
Core System Support Software Group
Reduced Network Support Software Group
For information about software groups, see Disk Space Recommendations for Software Groups in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
Table 3–2 Required Packages for Solaris Live Upgrade
To check for packages on your system, type the following command.
% pkginfo package_name
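For example, to verify that the Solaris Live Upgrade packages themselves are installed:

% pkginfo SUNWlur SUNWluu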
Follow general disk space requirements for an upgrade. See Chapter 4, System Requirements, Guidelines, and Upgrade (Planning), in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
To estimate the file system size that is needed to create a boot environment, start the creation of a new boot environment. The size is calculated. You can then abort the process.
The disk on the new boot environment must be able to serve as a boot device. Some systems restrict which disks can serve as a boot device. Refer to your system's documentation to determine if any boot restrictions apply.
The disk might need to be prepared before you create the new boot environment. Check that the disk is formatted properly:
Identify slices large enough to hold the file systems to be copied.
Identify file systems that contain directories that you want to share between boot environments rather than copy. If you want a directory to be shared, you need to create a new boot environment with the directory put on its own slice. The directory is then a file system and can be shared with future boot environments. For more information about creating separate file systems for sharing, see Guidelines for Selecting Slices for Shareable File Systems.
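For example, a minimal sketch that splits /home onto its own slice when creating a new boot environment, so that it can later be shared; the slice assignments are illustrative:

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m /home:/dev/dsk/c0t1d0s7:ufs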
Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems that are RAID-1 volumes (mirrors). Solaris Live Upgrade does not implement the full functionality of Solaris Volume Manager, but does require the following components of Solaris Volume Manager.
Table 3–3 Required Components for Solaris Live Upgrade and RAID-1 Volumes
| Requirement | Description | For More Information |
|---|---|---|
| You must create at least one state database and at least three state database replicas. | A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. The replicas protect against data loss from single points of failure. | For information about creating a state database, see Chapter 6, State Database (Overview), in Solaris Volume Manager Administration Guide. |
| Solaris Live Upgrade supports only a RAID-1 volume (mirror) with single-slice concatenations on the root (/) file system. | A concatenation is a RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a RAID-1 volume. A RAID-1 volume can comprise a maximum of three concatenations. | For guidelines about creating mirrored file systems, see Guidelines for Selecting Slices for Mirrored File Systems. |
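For example, a minimal sketch that creates the initial state database and three replicas on a dedicated slice (the slice name is illustrative):

# metadb -a -f -c 3 c0t0d0s7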
You can use Solaris Live Upgrade to add patches and packages to a system. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command. When you use the luupgrade command, you can also use a Solaris Flash archive to install patches or packages. Sample luupgrade commands follow the table below.
When upgrading, or when adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail during an upgrade, or can alter the active boot environment.
For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).
| Type of Installation | Description | For More Information |
|---|---|---|
| Adding patches to a boot environment | Create a new boot environment and use the luupgrade command with the -t option. | To Add Patches to an Operating System Image on a Boot Environment (Command-Line Interface) |
| Adding packages to a boot environment | Use the luupgrade command with the -p option. | To Add Packages to an Operating System Image on a Boot Environment (Command-Line Interface) |
| Using Solaris Live Upgrade to install a Solaris Flash archive | An archive contains a complete copy of a boot environment with new packages and patches already included. This copy can be installed on multiple systems. | |
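For example, minimal sketches of both operations; the boot environment name be2, the source paths, and the patch and package names are illustrative:

# luupgrade -t -n be2 -s /var/tmp/patches 123456-01
# luupgrade -p -n be2 -s /var/spool/pkg SUNWexample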
The lucreate -m option specifies which file systems and the number of file systems to be created in the new boot environment. You must specify the exact number of file systems you want to create by repeating this option. When using the -m option to create file systems, follow these guidelines:
You must specify one -m option for the root (/) file system for the new boot environment. If you run lucreate without the -m option, the Configuration menu is displayed. The Configuration menu enables you to customize the new boot environment by redirecting files onto new mount points.
Any critical file systems that exist in the current boot environment and that are not specified in a -m option are merged into the next highest-level file system created.
Only the file systems that are specified by the -m option are created on the new boot environment. To create the same number of file systems that is on your current system, you must specify one -m option for each file system to be created.
For example, a single use of the -m option specifies where to put all the file systems. You merge all the file systems from the original boot environment into the one file system that is specified by the -m option. If you specify the -m option twice, you create two file systems. If you have file systems for root (/), /opt, and /var, you would use one -m option for each file system on the new boot environment. (A sample command follows these guidelines.)
Do not duplicate a mount point. For example, you cannot have two root (/) file systems.
When you create file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from creating invalid configurations for critical file systems. For example, you could type a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.
Do not overlap slices when reslicing disks. If this condition exists, the new boot environment appears to have been created, but when activated, the boot environment does not boot. The overlapping file systems might be corrupted.
For Solaris Live Upgrade to work properly, the vfstab file on the active boot environment must have valid contents and must have an entry for the root (/) file system at the minimum.
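For example, a minimal sketch of the three -m case described in the guidelines above; the target slices on the second disk are illustrative:

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m /opt:/dev/dsk/c0t1d0s5:ufs -m /var:/dev/dsk/c0t1d0s4:ufs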
When you create an inactive boot environment, you need to identify a slice where the root (/) file system is to be copied. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following:
Must be a slice from which the system can boot.
Must meet the recommended minimum size.
Can be on different physical disks or the same disk as the active root (/) file system.
Can be a Veritas Volume Manager volume (VxVM). If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.
You can create a new boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. Critical file systems that are copied to the new boot environment can be of the following types:
A physical slice.
A single-slice concatenation that is included in a RAID-1 volume (mirror). The slice that contains the root (/) file system can be a RAID-1 volume.
A single-slice concatenation that is included in a RAID-0 volume. The slice that contains the root (/) file system can be a RAID-0 volume.
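For example, a hedged sketch, based on the lucreate mirror and attach keywords, that makes the new root (/) file system a RAID-1 volume with a single-slice concatenation; the volume names d10 and d11 and the slice are illustrative:

# lucreate -n be2 -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/dsk/c0t1d0s0,d11:attach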
When you create a new boot environment, the lucreate -m command recognizes the following three types of devices:
A physical slice in the form of /dev/dsk/cwtxdysz
A Solaris Volume Manager volume in the form of /dev/md/dsk/dnum
A Veritas Volume Manager volume in the form of /dev/vx/dsk/volume_name. If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.
If you have problems upgrading with Veritas VxVM, see System Panics When Upgrading With Solaris Live Upgrade Running Veritas VxVm.
Use the following guidelines to check whether a RAID-1 volume is busy or resyncing, and whether volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.
For volume naming guidelines, see RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
If a mirror or submirror needs maintenance or is busy, components cannot be detached. You should use the metastat command before creating a new boot environment and using the detach keyword. The metastat command checks if the mirror is in the process of resynchronization or if the mirror is in use. For information, see the man page metastat(1M).
If you use the detach keyword to detach a submirror, lucreate checks if a device is currently resyncing. If the device is resyncing, you cannot detach the submirror and you see an error message.
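For example, to check the state of a mirror before detaching a submirror (the volume name d10 is illustrative):

# metastat d10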
Resynchronization is the process of copying data from one submirror to another submirror after the following events:
Submirror failures.
System crashes.
A submirror has been taken offline and brought back online.
The addition of a new submirror.
For more information about resynchronization, see RAID-1 Volume (Mirror) Resynchronization in Solaris Volume Manager Administration Guide.
Use the lucreate command rather than Solaris Volume Manager commands to manipulate volumes on inactive boot environments. The Solaris Volume Manager software has no knowledge of boot environments, whereas the lucreate command contains checks that prevent you from inadvertently destroying a boot environment. For example, lucreate prevents you from overwriting or deleting a Solaris Volume Manager volume.
However, if you have already used Solaris Volume Manager software to create complex Solaris Volume Manager concatenations, stripes, and mirrors, you must use Solaris Volume Manager software to manipulate them. Solaris Live Upgrade is aware of these components and supports their use. Before using Solaris Volume Manager commands that can create, modify, or destroy volume components, use the lustatus or lufslist commands. These commands can determine which Solaris Volume Manager volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.
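For example, to list the boot environments and then the file systems that a specific boot environment uses (the name be2 is illustrative):

# lustatus
# lufslist be2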
These guidelines contain configuration recommendations and examples for a swap slice.
You can configure a swap slice in three ways by using the lucreate command with the -m option:
If you do not specify a swap slice, the swap slices belonging to the current boot environment are configured for the new boot environment.
If you specify one or more swap slices, these slices are the only swap slices that are used by the new boot environment. The two boot environments do not share any swap slices.
You can specify to both share a swap slice and add a new slice for swap.
The following examples show the three ways of configuring swap. The current boot environment is configured with the root (/) file system on c0t0d0s0. The swap file system is on c0t0d0s1.
In the following example, no swap slice is specified. The new boot environment contains the root (/) file system on c0t1d0s0. Swap is shared between the current and new boot environment on c0t0d0s1.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs
In the following example, a swap slice is specified. The new boot environment contains the root (/) file system on c0t1d0s0. A new swap file system is created on c0t1d0s1. No swap slice is shared between the current and new boot environment.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m -:/dev/dsk/c0t1d0s1:swap
In the following example, a swap slice is added and another swap slice is shared between the two boot environments. The new boot environment contains the root (/) file system on c0t1d0s0. A new swap slice is created on c0t1d0s1. The swap slice on c0t0d0s1 is shared between the current and new boot environment.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m -:shared:swap -m -:/dev/dsk/c0t1d0s1:swap
Boot environment creation fails if the swap slice is being used by any boot environment other than the current one. If the boot environment was created using the -s option, the alternate-source boot environment can use the swap slice, but no other boot environment can.
Solaris Live Upgrade copies the entire contents of a slice to the designated new boot environment slice. You might want some large file systems on that slice to be shared between boot environments rather than copied, to conserve space and copying time. File systems that are critical to the OS, such as root (/) and /var, must be copied. File systems such as /home are not critical file systems and could be shared between boot environments. Shareable file systems must be user-defined file systems and on separate disk slices on both the active and new boot environments. You can reconfigure the disk several ways, depending on your needs.
| Reconfiguring a Disk | Examples | For More Information |
|---|---|---|
| You can reslice the disk before creating the new boot environment and put the shareable file system on its own slice. | For example, if the root (/) file system, /var, and /home are on the same slice, reconfigure the disk and put /home on its own slice. When you create any new boot environments, /home is shared with the new boot environment by default. | |
| If you want to share a directory, the directory must be split off to its own slice. The directory is then a file system that can be shared with another boot environment. You can use the lucreate command with the -m option to create a new boot environment and split a directory off to its own slice. But the new file system cannot yet be shared with the original boot environment. You need to run the lucreate command with the -m option again to create another boot environment. The two new boot environments can then share the directory. | For example, if you wanted to upgrade from the Solaris 9 release to the Solaris 10 11/06 release and share /home, you could run the lucreate command with the -m option to create a Solaris 9 boot environment with /home as a separate file system on its own slice. Then run the lucreate command with the -m option again to duplicate that boot environment. This third boot environment can then be upgraded to the Solaris 10 11/06 release. /home is shared between the Solaris 9 and Solaris 10 11/06 releases. | For a description of shareable and critical file systems, see File System Types. |
When you create a new boot environment, some directories and files can be excluded from a copy to the new boot environment. If you have excluded a directory, you can also reinstate specified subdirectories or files under the excluded directory. These subdirectories or files that have been restored are then copied to the new boot environment. For example, you could exclude from the copy all files and directories in /etc/mail, but include all files and directories in /etc/mail/staff. The following command copies the staff subdirectory to the new boot environment.
# lucreate -n second_disk -x /etc/mail -y /etc/mail/staff
Use the file-exclusion options with caution. Do not remove files or directories that are required by the system.
The following table lists the lucreate command options for removing and restoring directories and files.
How Specified? |
Exclude Options |
Include Options |
---|---|---|
Specify the name of the directory or file |
-x exclude_dir |
-y include_dir |
Use a file that contains a list |
-f list_filename -z list_filename |
-Y list_filename -z list_filename |
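For example, a hedged sketch of a list file for the -z option that reproduces the /etc/mail example above; in this format, a leading minus sign excludes an item and a leading plus sign includes it (the file name is illustrative):

- /etc/mail
+ /etc/mail/staff

# lucreate -n second_disk -z /var/tmp/filelist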
For examples of customizing the directories and files when creating a boot environment, see To Create a Boot Environment and Customize the Content (Command-Line Interface).
When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain critical system files and directories might be copied from the last-active boot environment to the boot environment being booted. Those files and directories that have changed are copied.
Solaris Live Upgrade checks for critical files that have changed. If these files' content is not the same in both boot environments, they are copied from the active boot environment to the new boot environment. Synchronizing is meant for critical files such as /etc/passwd or /etc/group files that might have changed since the new boot environment was created.
The /etc/lu/synclist file contains a list of directories and files that are synchronized. In some instances, you might want to copy other files from the active boot environment to the new boot environment. You can add directories and files to /etc/lu/synclist if necessary.
Adding files not listed in the /etc/lu/synclist could cause a system to become unbootable. The synchronization process only copies files and creates directories. The process does not remove files and directories.
The following example of the /etc/lu/synclist file shows the standard directories and files that are synchronized for this system.
/var/mail                OVERWRITE
/var/spool/mqueue        OVERWRITE
/var/spool/cron/crontabs OVERWRITE
/var/dhcp                OVERWRITE
/etc/passwd              OVERWRITE
/etc/shadow              OVERWRITE
/etc/opasswd             OVERWRITE
/etc/oshadow             OVERWRITE
/etc/group               OVERWRITE
/etc/pwhist              OVERWRITE
/etc/default/passwd      OVERWRITE
/etc/dfs                 OVERWRITE
/var/log/syslog          APPEND
/var/adm/messages        APPEND
Examples of directories and files that might be appropriate to add to the synclist file are the following:
/var/yp          OVERWRITE
/etc/mail        OVERWRITE
/etc/resolv.conf OVERWRITE
/etc/domainname  OVERWRITE
The synclist file entries can be files or directories. The second field is the method of updating that occurs on the activation of the boot environment. You can choose from three methods to update files:
OVERWRITE – The contents of the active boot environment's file overwrites the contents of the new boot environment file. OVERWRITE is the default action if no action is specified in the second field. If the entry is a directory, all subdirectories are copied. All files are overwritten. The new boot environment file has the same date, mode, and ownership as the same file on the previous boot environment.
APPEND – The contents of the active boot environment's file are added to the end of the new boot environment's file. This addition might lead to duplicate entries in the file. Directories cannot be listed as APPEND. The new boot environment file has the same date, mode, and ownership as the same file on the previous boot environment.
PREPEND – The contents of the active boot environment's file are added to the beginning of the new boot environment's file. This addition might lead to duplicate entries in the file. Directories cannot be listed as PREPEND. The new boot environment file has the same date, mode, and ownership as the same file on the previous boot environment.
The first time you boot from a newly created boot environment, Solaris Live Upgrade synchronizes the new boot environment with the boot environment that was last active. After this initial boot and synchronization, Solaris Live Upgrade does not perform a synchronization unless requested.
To force synchronization by using the CUI, you type yes when prompted.
To force synchronization by using the CLI, you use the luactivate command with the -s option.
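For example (the boot environment name be2 is illustrative):

# luactivate -s be2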
You might want to force a synchronization if you are maintaining multiple versions of the Solaris OS. You might want changes in files such as email or passwd/group to be in the boot environment that you are activating. If you force a synchronization, Solaris Live Upgrade checks for conflicts between files that are subject to synchronization. When the new boot environment is booted and a conflict is detected, a warning is issued and the files are not synchronized. Activation can be completed successfully despite such a conflict. A conflict can occur if you make changes to the same file on both the new boot environment and the active boot environment. For example, you make changes to the /etc/passwd file on the original boot environment. Then you make other changes to the /etc/passwd file on the new boot environment. The synchronization process cannot choose which file to copy for the synchronization.
Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 11/06 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 11/06 files might not be compatible with the Solaris 9 files.
Starting with the Solaris 10 1/06 release, a GRUB boot menu provides an optional method of switching between boot environments. The GRUB menu is an alternative to activating with the luactivate command or the Activate menu.
| Task | Information |
|---|---|
| To activate a boot environment with the GRUB menu | x86: To Activate a Boot Environment With the GRUB Menu (Command-Line Interface) |
| To fall back to the original boot environment with a GRUB menu | x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu |
| For overview and planning information for GRUB | |
| For a complete GRUB overview and system administration tasks | |
When viewing the character user interface remotely, such as over a tip line, you might need to set the TERM environment variable to VT220. Also, when using the Common Desktop Environment (CDE), set the value of the TERM variable to dtterm, rather than xterm.
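For example, a minimal sketch for a Bourne-compatible shell:

$ TERM=VT220; export TERM

For CDE:

$ TERM=dtterm; export TERM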