Live Upgrade Process

The following overview describes the tasks necessary to create a copy of the current boot environment, upgrade the copy, and switch the upgraded copy to become the active boot environment. The fallback process of switching back to the original boot environment is also described. Figure 2-1 illustrates the complete Live Upgrade process.

Figure 2-1 Live Upgrade Process


The following sections describe the Live Upgrade process.

  1. Creating a Boot Environment. A new boot environment can be created on a physical slice or a logical volume.

  2. Upgrading a Boot Environment

  3. Activating a Boot Environment

  4. Falling Back to the Original Boot Environment
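
The four steps above correspond roughly to four commands. The following sketch uses hypothetical names and paths (a new boot environment named second_disk, a root slice on c0t1d0s0, and a network-mounted installation image) and is meant only to show the flow, not exact syntax for your system:

# lucreate -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs
# luupgrade -u -n second_disk -s /net/installmachine/export/Solaris_10/OS_image
# luactivate second_disk
# init 6

Falling back, if it becomes necessary, is again a luactivate of the original boot environment followed by a reboot.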

Creating a Boot Environment

The process of creating a boot environment provides a method of copying critical file systems from an active boot environment to a new boot environment. The disk is reorganized if necessary, file systems are customized, and the critical file systems are copied to the new boot environment.

File System Types

Live Upgrade distinguishes between two file system types: critical file systems and shareable file systems. The following descriptions explain these file system types.

Critical file systems
Critical file systems are required by the Oracle Solaris OS. These file systems are separate mount points in the vfstab of the active and inactive boot environments. These file systems are always copied from the source to the inactive boot environment. Critical file systems are sometimes referred to as nonshareable. Examples are root (/), /usr, /var, and /opt.

Shareable file systems
Shareable file systems are user-defined file systems, such as /export, that have the same mount point in the vfstab of both the active and inactive boot environments. Therefore, updating shared files in the active boot environment also updates data in the inactive boot environment. When you create a new boot environment, shareable file systems are shared by default, but you can specify a destination slice and then the file systems are copied. /export is an example of a file system that can be shared.

For more detailed information about shareable file systems, see Guidelines for Selecting Slices for Shareable File Systems.

Swap
  • For UFS file systems, swap is a special shareable volume. Like a shareable file system, all swap slices are shared by default. However, if you specify a destination slice for swap, the swap slice is copied, as shown in the example after this list.
  • For ZFS file systems, swap and dump volumes are shared within the pool.
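
For example, to give the new boot environment its own swap slice rather than sharing swap, you could name a destination slice with the -m option. The device names in this sketch are hypothetical:

# lucreate -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs \
-m -:/dev/dsk/c0t1d0s1:swap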

Creating RAID-1 Volumes on File Systems

Live Upgrade can create a boot environment with RAID-1 volumes (mirrors) on file systems. For an overview, see Creating a Boot Environment With RAID-1 Volume File Systems.

Copying File Systems

The process of creating a new boot environment begins by identifying an unused slice where a critical file system can be copied. If a slice is not available or a slice does not meet the minimum requirements, you need to format a new slice.

After the slice is defined, you can reconfigure the file systems on the new boot environment before the file systems are copied into the directories. You reconfigure file systems by splitting and merging them, which provides a simple way of editing the vfstab to connect and disconnect file system directories. You can merge file systems into their parent directories by specifying the same mount point. You can also split file systems from their parent directories by specifying different mount points.

After file systems are configured on the inactive boot environment, you begin the automatic copy. Critical file systems are copied to the designated directories. Shareable file systems are not copied, but are shared. The exception is that you can designate some shareable file systems to be copied. When the file systems are copied from the active to the inactive boot environment, the files are directed to the new directories. The active boot environment is not changed in any way.
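
As an illustration of splitting, the following lucreate command (the slice names are hypothetical) places root (/) and /usr on separate slices in the new boot environment:

# lucreate -c first_disk -n second_disk \
-m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s3:ufs

Merging works the other way: by giving a file system the same mount point as its parent, its contents are copied into the parent file system on the new boot environment.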

For more information, see the following resources:

  • For procedures to split or merge file systems, see Chapter 4, Using Live Upgrade to Create a Boot Environment (Tasks).

  • For an overview of creating a boot environment with RAID-1 volume file systems, see Creating a Boot Environment With RAID-1 Volume File Systems.

Examples of Creating a New Boot Environment

For UFS file systems, the figures in this section illustrate various ways of creating new boot environments.

For ZFS file system information, see Chapter 10, Live Upgrade and ZFS (Overview).

The following figure shows that critical file system root (/) has been copied to another slice on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. The new boot environment is an exact duplicate with the root (/) file system on a new slice. The /swap volume and /export/home file system are shared by the active and inactive boot environments.

Figure 2-2 Creating an Inactive Boot Environment – Copying the root (/) File System

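A layout like the one in Figure 2-2 could be created with a single lucreate invocation. The boot environment names and the target slice below are hypothetical:

# lucreate -c first_disk -n second_disk -m /:/dev/dsk/c0t4d0s0:ufs

Because swap and /export/home are not named, they remain shared between the two boot environments.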

The following figure shows critical file systems that have been split and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. On that slice, the root (/) file system contains the /usr, /var, and /opt directories. In the new boot environment, the root (/) file system is split and /usr and /opt are put on separate slices. The /swap volume and /export/home file system are shared by both boot environments.

Figure 2-3 Creating an Inactive Boot Environment – Splitting File Systems


The following figure shows critical file systems that have been merged and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system, /usr, /var, and /opt with each file system on their own slice. In the new boot environment, /usr and /opt are merged into the root (/) file system on one slice. The /swap volume and /export/home file system are shared by both boot environments.

Figure 2-4 Creating an Inactive Boot Environment – Merging File Systems


Creating a Boot Environment With RAID-1 Volume File Systems

Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems encapsulated in RAID-1 volumes. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. Live Upgrade enables a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system.

A volume can group disk slices across several disks to transparently appear as a single disk to the OS. Live Upgrade is limited to creating a boot environment for the root (/) file system that contains single-slice concatenations inside a RAID-1 volume (mirror). This limitation is because the boot PROM is restricted to choosing one slice from which to boot.

Managing Volumes With Live Upgrade

When you create a boot environment, Live Upgrade can manage RAID-1 volume tasks for you. You use the lucreate command with the -m option to create a mirror, detach submirrors, and attach submirrors for the new boot environment.


Note - If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.


For more information about the volume tasks that Live Upgrade can manage, see Mapping Solaris Volume Manager Tasks to Live Upgrade.

Mapping Solaris Volume Manager Tasks to Live Upgrade

Live Upgrade manages a subset of Solaris Volume Manager tasks. The following table shows the Solaris Volume Manager components that Live Upgrade can manage.

Table 2-1 Classes of Volumes

Concatenation
A RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a mirror.

Mirror
A RAID-1 volume. See RAID-1 volume.

RAID-1 volume
A class of volume that replicates data by maintaining multiple copies. A RAID-1 volume is sometimes called a mirror. A RAID-1 volume is composed of one or more RAID-0 volumes that are called submirrors.

RAID-0 volume
A class of volume that can be a stripe or a concatenation. These components are also called submirrors. A stripe or concatenation is the basic building block for mirrors.

State database
A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. The state database tracks the location and status of all known state database replicas.

State database replica
A copy of a state database. The replica ensures that the data in the database is valid.

Submirror
See RAID-0 volume.

Volume
A group of physical slices or other volumes that appears to the system as a single logical device. A volume is functionally identical to a physical disk from the point of view of an application or a file system. In some command-line utilities, a volume is called a metadevice.
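
Live Upgrade creates and attaches these objects for you, but you can inspect them with the standard Solaris Volume Manager commands. In the following sketch, d30 is a hypothetical mirror created by lucreate:

# metadb
# metastat d30

The first command lists the state database replicas; the second shows the status of the mirror and its submirrors.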

Examples of Using Live Upgrade to Create RAID-1 Volumes

The examples in this section present command syntax for creating RAID-1 volumes for a new boot environment.

Create a RAID-1 Volume on Two Physical Disks

The following figure shows a new boot environment with a RAID-1 volume (mirror) that is created on two physical disks. The following command created the new boot environment and the mirror:

# lucreate -n second_disk -m /:/dev/md/dsk/d30:mirror,ufs \
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d31:attach -m /:/dev/dsk/c0t2d0s0,/dev/md/dsk/d32:attach \
-m -:/dev/dsk/c0t1d0s1:swap -m -:/dev/dsk/c0t2d0s1:swap

This command performs the following tasks:

  • Creates a new boot environment named second_disk.

  • Creates the mirror d30 and configures a UFS root (/) file system on the mirror.

  • Creates single-device concatenations d31 and d32 on slices c0t1d0s0 and c0t2d0s0 and attaches them to mirror d30 as submirrors.

  • Copies the root (/) file system to the mirror.

  • Configures swap on slices c0t1d0s1 and c0t2d0s1, one on each physical disk.

Figure 2-5 Create a Boot Environment and Create a Mirror

Create a Boot Environment and Use the Existing Submirror

The following figure shows a new boot environment that contains a RAID-1 volume (mirror). The following command created the new boot environment and the mirror:

# lucreate -n second_disk -m /:/dev/md/dsk/d20:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:detach,attach,preserve

This command performs the following tasks:

  • Creates a new boot environment named second_disk.

  • Creates the mirror d20 and configures a UFS root (/) file system on the mirror.

  • Detaches the existing submirror on slice c0t1d0s0 from its current mirror, preserves its contents, and attaches it to the new mirror d20. The preserved file system contents are used for the new boot environment instead of being copied.

Figure 2-6 Create a Boot Environment and Use the Existing Submirror


Upgrading a Boot Environment

After you have created a boot environment, you can perform an upgrade on it. The boot environment that you upgrade can contain RAID-1 volumes (mirrors) for any file systems, and it can have non-global zones installed. The upgrade does not affect any files in the active boot environment. When you are ready, you activate the new boot environment, which then becomes the current boot environment.


Note - Starting with the Oracle Solaris 10 9/10 release, the upgrade process is affected by Auto Registration. See Auto Registration Impact for Live Upgrade.


For upgrade procedures, see Chapter 5, Upgrading With Live Upgrade (Tasks).
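
For example, an upgrade of an inactive boot environment named second_disk from a network-mounted installation image might look like the following; the name and path are hypothetical:

# luupgrade -u -n second_disk -s /net/installmachine/export/Solaris_10/OS_image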

The following figure shows an upgrade to an inactive boot environment.

Figure 2-7 Upgrading an Inactive Boot Environment


Rather than upgrading, you can install a flash archive on a boot environment. The flash archive installation feature enables you to create a single reference installation of the Oracle Solaris OS on a system, which is called the master system. You can then replicate that installation on a number of systems, which are called clone systems. In this situation, the inactive boot environment is a clone. When you install the flash archive on a system, the archive replaces all the files on the existing boot environment, as an initial installation would.

For procedures about installing a flash archive, see Installing Flash Archives on a Boot Environment.
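
As a rough sketch, a flash archive is installed on an inactive boot environment with the luupgrade -f option. The boot environment name, image path, and archive location below are hypothetical:

# luupgrade -f -n second_disk -s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archives/solaris10.flar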

The following figures show an installation of a flash archive on an inactive boot environment. Figure 2-8 shows a system with a single hard disk. Figure 2-9 shows a system with two hard disks.

Figure 2-8 Installing a Flash Archive on a Single Disk


Figure 2-9 Installing a Flash Archive on Two Disks


Auto Registration Impact for Live Upgrade

Starting with the Oracle Solaris 10 9/10 release, the upgrade process is affected by Auto Registration.

What Is Auto Registration?

When you install or upgrade a system, configuration data about that system is, on rebooting, automatically communicated through the existing service tag technology to the Oracle Product Registration System. This service tag data about your system is used, for example, to help Oracle enhance customer support and services. You can use this same configuration data to create and manage your own inventory of your systems.

When Does Auto Registration Affect Live Upgrade?

Auto Registration does not change Live Upgrade procedures unless you are specifically upgrading a system from a prior release to the Oracle Solaris 10 9/10 release or a later release.

Auto Registration does not change any other Live Upgrade procedures, such as creating, activating, maintaining, or falling back to boot environments.

When, and only when, you are upgrading a system from a prior release to the Oracle Solaris 10 9/10 release or to a later release, you must create an Auto Registration configuration file. Then, when you upgrade that system, you must use the -k option with the luupgrade -u command and point to this configuration file.

How to Provide Auto Registration Information During an Upgrade

When, and only when, you are upgrading a prior release to the Oracle Solaris 10 9/10 release or to a later release, use this procedure to provide required Auto Registration information during the upgrade.

  1. Create a configuration file that contains your support credentials and, optionally, your proxy information.

    This file should be formatted as a list of keyword-value pairs. Include the following keywords and values, in this format, in the file:

    http_proxy=Proxy-Server-Host-Name
    http_proxy_port=Proxy-Server-Port-Number
    http_proxy_user=HTTP-Proxy-User-Name
    http_proxy_pw=HTTP-Proxy-Password
    oracle_user=My-Oracle-Support-User-Name
    oracle_pw=My-Oracle-Support-Password

    Note the following formatting rules:

    • The passwords must be in plain, not encrypted, text.

    • Keyword order does not matter.

    • Keywords can be omitted entirely if you do not want to specify a value. Alternatively, you can retain the keyword and leave its value blank.


      Note - If you omit the support credentials, the registration will be anonymous.


    • Spaces in the configuration file do not matter, unless a value itself must contain a space. Only the http_proxy_user and http_proxy_pw values can contain a space within the value.

    • The oracle_pw value must not contain a space.

    The following example shows a sample file.

    http_proxy=webcache.central.example.com
    http_proxy_port=8080
    http_proxy_user=webuser
    http_proxy_pw=secret1
    oracle_user=joe.smith@example.com
    oracle_pw=csdfl2442IJS
  2. Save the file.
  3. Run the luupgrade -u -k /path/filename command, including any of the other standard luupgrade command options as needed for that particular upgrade.
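
    For example, assuming a boot environment named second_disk, a network-mounted installation image, and a configuration file saved as /var/tmp/autoreg_config (all hypothetical), the command might look like this:

    # luupgrade -u -n second_disk -s /net/installmachine/export/Solaris_10/OS_image \
    -k /var/tmp/autoreg_config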

How to Disable Auto Registration During an Upgrade

  1. Create a configuration file or revise the content of the existing configuration file you created so that the file contains only the following line:
    autoreg=disable
  2. Save the file.
  3. Run the luupgrade -u -k /path/filename command, including any of the other standard luupgrade command options as needed for that particular upgrade.
  4. (Optional) When the Live Upgrade has completed and the system reboots, verify that the Auto Registration feature is disabled as follows.
    # /opt/ocm/ccr/bin/emCCR status
        Oracle Configuration Manager - Release: 10.3.6.0.1 - Production
        Copyright (c) 2005, 2011, Oracle and/or its affiliates.  All rights reserved.
        ------------------------------------------------------------------
        Log Directory            /opt/ocm/config_home/ccr/log
        Collector Mode           Disconnected

Activating a Boot Environment

When you are ready to switch and make the new boot environment active, you can easily activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain system files and directories are copied from the last-active boot environment to the boot environment being booted. When you reboot the system, the configuration that you installed on the new boot environment is active. The original boot environment then becomes an inactive boot environment.

For procedures about activating a boot environment, see Activating a Boot Environment. For information about synchronizing the active and inactive boot environment, see Synchronizing Files Between Boot Environments.
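
As a minimal sketch, activating a hypothetical boot environment named second_disk and rebooting looks like the following. The Live Upgrade documentation calls for the init or shutdown command for this reboot, rather than reboot or halt, so that the boot environments are switched and synchronized correctly:

# luactivate second_disk
# init 6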

The following figure shows a switch after a reboot from an inactive to an active boot environment.

Figure 2-10 Activating an Inactive Boot Environment


Falling Back to the Original Boot Environment

If a failure occurs, you can quickly fall back to the original boot environment with an activation and reboot. Falling back takes only the time required to reboot the system, which is much quicker than backing up and restoring the original environment. The new boot environment that failed to boot is preserved so that the failure can be analyzed. You can fall back only to the boot environment that was used by luactivate to activate the new boot environment.

The following table describes the ways you can fall back to the previous boot environment.

Problem: The new boot environment boots successfully, but you are not happy with the results.
Action: Run the luactivate command with the name of the previous boot environment and reboot.

x86 only - Starting with the Solaris 10 1/06 release, you can fall back by selecting the original boot environment from the GRUB menu. The original boot environment and the new boot environment must be based on the GRUB software. Booting from the GRUB menu does not synchronize files between the old and new boot environments. For more information about synchronizing files, see Forcing a Synchronization Between Boot Environments.

Problem: The new boot environment does not boot.
Action: Boot the fallback boot environment in single-user mode, run the luactivate command, and reboot.

Problem: You cannot boot in single-user mode.
Action: Perform the following steps:

  • Boot from DVD or CD media or a net installation image.

  • Mount the root (/) file system on the fallback boot environment.

  • Run the luactivate command and reboot.

For procedures to fall back, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
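
As a rough, hypothetical sketch of the last case in the table (Chapter 6 has the authoritative steps): after booting from media or a net installation image in single-user mode, you mount the root slice of the fallback boot environment and run luactivate from it, for example:

# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
# umount /mnt
# init 6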

The following figure shows the switch that is made when you reboot to fall back.

Figure 2-11 Fallback to the Original Boot Environment


Maintaining a Boot Environment

You can also perform various maintenance activities, such as checking status, renaming, or deleting a boot environment. For maintenance procedures, see Chapter 7, Maintaining Live Upgrade Boot Environments (Tasks).
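
For example, checking status, renaming, and deleting map onto the lustatus, lurename, and ludelete commands. The boot environment names below are hypothetical:

# lustatus
# lurename -e second_disk -n backup_be
# ludelete backup_be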