Oracle Solaris 10 8/11 Installation Guide: Live Upgrade and Upgrade Planning
Guidelines for Creating File Systems With the lucreate Command

When you create file systems for a boot environment, the rules are identical to the rules for creating file systems for the Oracle Solaris OS. Live Upgrade cannot prevent you from creating invalid configurations for critical file systems. For example, you could type a lucreate command that creates separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.
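To illustrate the difference, the following sketch contrasts a valid and an invalid division; the boot environment and device names are hypothetical. The first command is valid because /var can be split from root (/) onto its own slice:

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m /var:/dev/dsk/c0t1d0s3:ufs

The second command is invalid because /kernel must remain part of root (/), yet lucreate does not reject it:

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m /kernel:/dev/dsk/c0t1d0s4:ufs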
Do not overlap slices when reslicing disks. If slices overlap, the new boot environment appears to have been created, but when activated, the boot environment does not boot. The overlapping file systems might be corrupted.
For Live Upgrade to work properly, the vfstab file on the active boot environment must have valid contents and must contain, at a minimum, an entry for the root (/) file system.
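For reference, a minimal root (/) entry in /etc/vfstab has the following form; the device names are hypothetical. The fields are the block device to mount, the raw device to fsck, the mount point, the file system type, the fsck pass, whether to mount at boot, and the mount options:

/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -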
Guidelines for Selecting a Slice for the root (/) File System

When you create an inactive boot environment, you need to identify a slice to which the root (/) file system is to be copied. Use the following guidelines when you select that slice; a quick verification check is sketched after the list. The slice must comply with the following requirements:
Must be a slice from which the system can boot.
Must meet the recommended minimum size.
Can be on a different physical disk or the same disk as the active root (/) file system.
Can be a Veritas Volume Manager (VxVM) volume. If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.
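One way to verify a candidate slice before running lucreate is to inspect the disk's partition table. The following sketch assumes a hypothetical second disk, c0t1d0; slice 2 conventionally represents the whole disk:

# prtvtoc /dev/rdsk/c0t1d0s2

The output lists each slice's first sector and sector count, which you can compare against the recommended minimum size for the root (/) file system.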
Guidelines for Selecting Slices for Mirrored File Systems

You can create a new boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. Critical file systems that are copied to the new boot environment can be of the following types:
A physical slice.
A single-slice concatenation that is included in a RAID-1 volume (mirror). The slice that contains the root (/) file system can be a RAID-1 volume.
A single-slice concatenation that is included in a RAID-0 volume. The slice that contains the root (/) file system can be a RAID-0 volume.
When you create a new boot environment, the lucreate command with the -m option recognizes the following three types of devices. An example follows the note after this list:
A physical slice in the form of /dev/dsk/cwtxdysz
A Solaris Volume Manager volume in the form of /dev/md/dsk/dnum
A Veritas Volume Manager volume in the form of /dev/vx/dsk/volume_name. If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. As noted in the preceding section, the Veritas file system configuration is lost when the data is copied, and a UFS file system is created on the new boot environment.
Note - If you have problems upgrading with Veritas VxVM, see System Panics When Upgrading With Live Upgrade Running Veritas VxVm.
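The following sketch shows one way these device types combine; the boot environment, slice, and volume names are hypothetical. The command creates a boot environment whose root (/) file system is the RAID-1 volume d10, with the single-slice concatenation d1 attached as a submirror (additional submirrors can be attached the same way):

# lucreate -n be3 -m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d1:attach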
General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems

Use the following guidelines to check whether a RAID-1 volume is busy or resyncing, and whether its volumes contain file systems that are in use by a Live Upgrade boot environment.
For volume naming guidelines, see RAID Volume Name Requirements and Guidelines for Custom JumpStart and Live Upgrade in Oracle Solaris 10 8/11 Installation Guide: Planning for Installation and Upgrade.
If a mirror or submirror needs maintenance or is busy, components cannot be detached. Run the metastat command before creating a new boot environment and using the detach keyword. The metastat command reports whether the mirror is in the process of resynchronization or is in use. For more information, see the metastat(1M) man page.
If you use the detach keyword to detach a submirror, lucreate checks whether the device is currently resyncing. If the device is resyncing, you cannot detach the submirror and an error message is displayed.
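For example, assuming a mirror named d10 (a hypothetical name), the following command shows its state before you attempt a detach:

# metastat d10

If the output reports a state of Resyncing or Needs maintenance, wait for the resynchronization to complete or repair the mirror before using the detach keyword.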
Resynchronization is the process of copying data from one submirror to another submirror after any of the following events:
Submirror failures.
System crashes.
A submirror being taken offline and brought back online.
The addition of a new submirror.
For more information about resynchronization, see RAID-1 Volume (Mirror) Resynchronization in Solaris Volume Manager Administration Guide.
Use the lucreate command rather than Solaris Volume Manager commands to manipulate volumes on inactive boot environments. The Solaris Volume Manager software has no knowledge of boot environments, whereas the lucreate command contains checks that prevent you from inadvertently destroying a boot environment. For example, lucreate prevents you from overwriting or deleting a Solaris Volume Manager volume.
However, if you have already used Solaris Volume Manager software to create complex Solaris Volume Manager concatenations, stripes, and mirrors, you must use Solaris Volume Manager software to manipulate them. Live Upgrade is aware of these components and supports their use. Before using Solaris Volume Manager commands that can create, modify, or destroy volume components, use the lustatus or lufslist commands. These commands can determine which Solaris Volume Manager volumes contain file systems that are in use by a Live Upgrade boot environment.
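For example, the following sequence (the boot environment name be2 is hypothetical) lists the existing boot environments and then shows which file systems and volumes be2 uses:

# lustatus
# lufslist be2

Any Solaris Volume Manager volume that appears in the lufslist output should not be modified or destroyed with Solaris Volume Manager commands.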
Guidelines for Selecting a Slice for a Swap Volume

These guidelines contain configuration recommendations and examples for a swap slice.
Configuring Swap for the New Boot Environment

You can configure a swap slice in three ways by using the lucreate command with the -m option:
If you do not specify a swap slice, the swap slices belonging to the current boot environment are configured for the new boot environment.
If you specify one or more swap slices, these slices are the only swap slices that are used by the new boot environment. The two boot environments do not share any swap slices.
You can specify both sharing an existing swap slice and adding a new slice for swap.
The following examples show the three ways of configuring swap. The current boot environment is configured with the root (/) file system on c0t0d0s0. The swap file system is on c0t0d0s1.
In the following example, no swap slice is specified. The new boot environment contains the root (/) file system on c0t1d0s0. Swap is shared between the current and new boot environments on c0t0d0s1.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs
In the following example, a swap slice is specified. The new boot environment contains the root (/) file system on c0t1d0s0. A new swap file system is created on c0t1d0s1. No swap slice is shared between the current and new boot environments.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m -:/dev/dsk/c0t1d0s1:swap
In the following example, a swap slice is added and another swap slice is shared between the two boot environments. The new boot environment contains the root (/) file system on c0t1d0s0. A new swap slice is created on c0t1d0s1. The swap slice on c0t0d0s1 is shared between the current and new boot environments.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m -:shared:swap \
-m -:/dev/dsk/c0t1d0s1:swap
Boot environment creation fails if the swap slice is being used by any boot environment other than the current one. If the boot environment was created using the -s option, the alternate-source boot environment can also use the swap slice, but no other boot environment can.
Guidelines for Selecting Slices for Shareable File Systems

Live Upgrade copies the entire contents of a slice to the designated new boot environment slice. To conserve space and copying time, you might want some large file systems on that slice to be shared between boot environments rather than copied. File systems that are critical to the OS, such as root (/) and /var, must be copied. File systems such as /home are not critical and can be shared between boot environments. Shareable file systems must be user-defined file systems that reside on separate disk slices on both the active and new boot environments. You can reconfigure the disk in several ways, depending on your needs.
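For instance, assuming /home already resides on its own slice (the device names are hypothetical), omitting it from the -m options leaves it shared between the boot environments:

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs

To copy /home to the new boot environment instead, name it explicitly:

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m /home:/dev/dsk/c0t1d0s7:ufs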