Oracle Solaris 10 8/11 Installation Guide: Live Upgrade and Upgrade Planning

Guidelines for Selecting Slices for File Systems

When you create file systems for a boot environment, the rules are identical to the rules for creating file systems for the Oracle Solaris OS. Live Upgrade cannot prevent you from creating invalid configurations for critical file systems. For example, you could type a lucreate command that creates separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.
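As an illustration only, a command like the following sketch would produce that invalid split. The device names and boot environment name are hypothetical; do not run a command like this:

# lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /kernel:/dev/dsk/c0t4d0s3:ufs -n bad_be

Here /kernel is placed on its own slice, separate from root (/), so the resulting boot environment would be invalid.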

Do not overlap slices when reslicing disks. If slices overlap, the new boot environment appears to have been created, but when activated, the boot environment does not boot, and the overlapping file systems might be corrupted.

For Live Upgrade to work properly, the vfstab file on the active boot environment must have valid contents and must have, at a minimum, an entry for the root (/) file system.
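For reference, a valid root (/) entry in /etc/vfstab for a UFS root takes the following form. The device names are hypothetical:

/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -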

Guidelines for Selecting a Slice for the root (/) File System

When you create an inactive boot environment, you must identify a slice to which the root (/) file system is to be copied. Select a slice that the system can boot from and that meets the recommended minimum size for the root (/) file system.
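A minimal sketch of selecting a root slice with lucreate follows. The slice and boot environment names are hypothetical:

# lucreate -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk

This command copies the root (/) file system of the current boot environment to slice c0t4d0s0 and names the new boot environment second_disk.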

Guidelines for Selecting Slices for Mirrored File Systems

You can create a new boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. Critical file systems that are copied to the new boot environment can be any of these types.

When you create a new boot environment, the lucreate -m command recognizes the following three types of devices:

- A physical slice, in the form /dev/dsk/cwtxdysz
- A Solaris Volume Manager volume, in the form /dev/md/dsk/dnum
- A Veritas Volume Manager volume, in the form /dev/vx/dsk/volume_name


Note - If you have problems upgrading with Veritas VxVM, see System Panics When Upgrading With Live Upgrade Running Veritas VxVM.
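As a sketch only, a lucreate command that creates a mirrored root might look like the following. The volume names, slice name, and boot environment name are hypothetical; verify the exact keywords against the lucreate(1M) man page:

# lucreate -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/dsk/c0t4d0s0,d1:attach -n another_disk

This creates the RAID-1 volume d10 for the root (/) file system and attaches the slice c0t4d0s0 to it as the single-slice concatenation d1.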


General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems

Use the following guidelines to check whether a RAID-1 volume is busy or resynchronizing, and whether its volumes contain file systems that are in use by a Live Upgrade boot environment.

For volume naming guidelines, see RAID Volume Name Requirements and Guidelines for Custom JumpStart and Live Upgrade in Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade.

Checking Status of Volumes

If a mirror or submirror needs maintenance or is busy, components cannot be detached. Before you create a new boot environment and use the detach keyword, run the metastat command. The metastat command reports whether the mirror is in the process of resynchronization or is in use. For more information, see the metastat(1M) man page.
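For example, the following check examines a mirror before one of its submirrors is detached. The volume name d10 is hypothetical:

# metastat d10

If the output shows a submirror state of Resyncing, wait for the resynchronization to finish before running lucreate with the detach keyword.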

Detaching Volumes and Resynchronizing Mirrors

If you use the detach keyword to detach a submirror, lucreate checks whether the device is currently resynchronizing. If the device is resynchronizing, you cannot detach the submirror, and an error message is displayed.

Resynchronization is the process of copying data from one submirror to another submirror after a problem such as a submirror failure, a system crash, a submirror that has been taken offline and brought back online, or the addition of a new submirror.

For more information about resynchronization, see RAID-1 Volume (Mirror) Resynchronization in Solaris Volume Manager Administration Guide.

Using Solaris Volume Manager Commands

Use the lucreate command rather than Solaris Volume Manager commands to manipulate volumes on inactive boot environments. The Solaris Volume Manager software has no knowledge of boot environments, whereas the lucreate command contains checks that prevent you from inadvertently destroying a boot environment. For example, lucreate prevents you from overwriting or deleting a Solaris Volume Manager volume.

However, if you have already used Solaris Volume Manager software to create complex Solaris Volume Manager concatenations, stripes, and mirrors, you must use Solaris Volume Manager software to manipulate them. Live Upgrade is aware of these components and supports their use. Before using Solaris Volume Manager commands that can create, modify, or destroy volume components, use the lustatus or lufslist commands. These commands report which Solaris Volume Manager volumes contain file systems that are in use by a Live Upgrade boot environment.
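For example, the following sequence checks boot environment status before volumes are manipulated. The boot environment name second_disk is hypothetical; see the lustatus(1M) and lufslist(1M) man pages for the exact syntax:

# lustatus
# lufslist second_disk

lustatus lists each boot environment and whether it is active or complete; lufslist shows the file systems, including any Solaris Volume Manager volumes, that belong to the named boot environment.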

Guidelines for Selecting a Slice for a Swap Volume

These guidelines contain configuration recommendations and examples for a swap slice.

Configuring Swap for the New Boot Environment

You can configure a swap slice in three ways by using the lucreate command with the -m option: by not specifying swap, so that the current boot environment's swap slices are reused; by specifying a different swap slice; or by sharing the current swap slice and adding a new one.

The following examples show the three ways of configuring swap. The current boot environment is configured with the root (/) file system on c0t0d0s0. The swap file system is on c0t0d0s1.
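The commands below are a minimal sketch of the three configurations, assuming a new root (/) file system on the hypothetical slice c0t4d0s0; verify the exact syntax against the lucreate(1M) man page.

No swap slice is specified, so swap on c0t0d0s1 is shared between the current and new boot environments:

# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs

A different swap slice is specified, so the new boot environment uses swap on c0t0d0s2 and shares no swap with the current boot environment:

# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs -m -:/dev/dsk/c0t0d0s2:swap

The current swap slice is shared and a second swap slice is added:

# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs -m -:shared:swap -m -:/dev/dsk/c0t0d0s2:swap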

Failed Boot Environment Creation if Swap is in Use

Boot environment creation fails if the swap slice is being used by any boot environment other than the current boot environment. If the boot environment was created by using the -s option, the alternate-source boot environment can use the swap slice, but no other boot environment can.
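For instance, in the following sketch the new boot environment is created from the alternate source be1 rather than from the current boot environment, so be1 and the new boot environment can use the same swap slice. The names are hypothetical:

# lucreate -s be1 -n be3 -m /:/dev/dsk/c0t5d0s0:ufs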

Guidelines for Selecting Slices for Shareable File Systems

Live Upgrade copies the entire contents of a slice to the designated new boot environment slice. You might want some large file systems on that slice to be shared between boot environments rather than copied, to conserve space and copying time. File systems that are critical to the OS, such as root (/) and /var, must be copied. File systems such as /home are not critical and can be shared between boot environments. Shareable file systems must be user-defined file systems and must be on separate file system slices on both the active and new boot environments. You can reconfigure the disk in several ways, depending on your needs.

The following options describe ways of reconfiguring a disk, each with an example and a reference for more information.

Reconfiguring a disk: You can reslice the disk before creating the new boot environment and put the shareable file system on its own slice.

Example: If the root (/) file system, /var, and /home are on the same slice, reconfigure the disk and put /home on its own slice. When you create any new boot environments, /home is shared with the new boot environment by default.

Reconfiguring a disk: If you want to share a directory, the directory must be split off to its own slice. The directory is then a file system that can be shared with another boot environment. You can use the lucreate command with the -m option to create a new boot environment and split a directory off to its own slice. However, the new file system cannot yet be shared with the original boot environment. You need to run the lucreate command with the -m option again to create another boot environment. The two new boot environments can then share the directory.

Example: If you wanted to upgrade from the Solaris 9 release to the Oracle Solaris 10 8/11 release and share /home, you could run the lucreate command with the -m option to create a copy of the Solaris 9 boot environment with /home as a separate file system on its own slice. Then run the lucreate command with the -m option again to duplicate that boot environment, as sketched below. This third boot environment can then be upgraded to the Oracle Solaris 10 8/11 release. /home is shared between the Solaris 9 and Oracle Solaris 10 8/11 releases.

For more information: For a description of shareable and critical file systems, see File System Types.
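A minimal sketch of that two-step sequence follows, assuming hypothetical slice and boot environment names; verify the options against the lucreate(1M) man page:

# lucreate -n s9_split -m /:/dev/dsk/c0t4d0s0:ufs -m /home:/dev/dsk/c0t4d0s7:ufs
# lucreate -s s9_split -n s10_be -m /:/dev/dsk/c0t5d0s0:ufs

The first command creates a boot environment that splits /home onto its own slice. The second command creates another boot environment from it; /home is then shared between the two new boot environments by default, and s10_be can be upgraded to the Oracle Solaris 10 8/11 release.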