Solaris 10 11/06 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Part I Upgrading With Solaris Live Upgrade

This part provides an overview and instructions for using Solaris Live Upgrade to create and upgrade an inactive boot environment. The boot environment can then be switched to become the current boot environment.

Chapter 1 Where to Find Solaris Installation Planning Information

This book provides information on how to use the Solaris Live Upgrade program to upgrade the Solaris operating system. This book covers everything you need to use Solaris Live Upgrade, but you might find it useful to first read the planning book in our collection of installation documentation. The following references provide useful information to review before you upgrade your system.

Where to Find Planning and System Requirement Information

The Solaris 10 11/06 Installation Guide: Planning For Installation and Upgrade provides system requirements and high-level planning information, such as planning guidelines for file systems and for upgrading. The following list describes the chapters in the planning book and provides links to those chapters.

Chapter Descriptions From the Planning Guide 

Reference 

This chapter describes new features in the Solaris installation programs. 

Chapter 2, What’s New in Solaris Installation, in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade

This chapter provides you with information about decisions you need to make before you install or upgrade the Solaris OS. Examples are deciding when to use a network installation image or DVD media and descriptions of all the Solaris installation programs. 

Chapter 3, Solaris Installation and Upgrade (Roadmap), in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade

This chapter describes system requirements to install or upgrade to the Solaris OS. General guidelines for planning the disk space and default swap space allocation are also provided. Upgrade limitations are also described. 

Chapter 4, System Requirements, Guidelines, and Upgrade (Planning), in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade

This chapter contains checklists to help you gather all of the information that you need to install or upgrade your system. This information is useful, for example, if you are performing an interactive installation; the completed checklists give you all the information that you need.

Chapter 5, Gathering Information Before Installation or Upgrade (Planning), in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade

These chapters provide overviews of several technologies that relate to a Solaris OS installation or upgrade. Guidelines and requirements related to these technologies are also included. These chapters include information about GRUB based booting, Solaris Zones partitioning technology, and RAID-1 volumes that can be created at installation. 

Part II, Understanding Installations That Relate to GRUB, Solaris Zones, and RAID-1 Volumes, in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade

Chapter 2 Solaris Live Upgrade (Overview)

This chapter describes the Solaris Live Upgrade process.


Note –

This book uses the term slice, but some Solaris documentation and programs might refer to a slice as a partition.


Solaris Live Upgrade Introduction

Solaris Live Upgrade provides a method of upgrading a system while the system continues to operate. While your current boot environment is running, you can duplicate the boot environment, then upgrade the duplicate. Or, rather than upgrading, you can install a Solaris Flash archive on a boot environment. The original system configuration remains fully functional and unaffected by the upgrade or installation of an archive. When you are ready, you can activate the new boot environment by rebooting the system. If a failure occurs, you can quickly revert to the original boot environment with a simple reboot. This switch eliminates the normal downtime of the test and evaluation process.

Solaris Live Upgrade enables you to duplicate a boot environment without affecting the currently running system. You can then upgrade the duplicate, install a Solaris Flash archive on it, and activate it when you are ready.

Some understanding of basic system administration is necessary before using Solaris Live Upgrade. For background information about system administration tasks such as managing file systems, mounting, booting, and managing swap, see the System Administration Guide: Devices and File Systems.

Solaris Live Upgrade Process

The following overview describes the tasks necessary to create a copy of the current boot environment, upgrade the copy, and switch the upgraded copy to become the active boot environment. The fallback process of switching back to the original boot environment is also described. Figure 2–1 describes this complete Solaris Live Upgrade process.

Figure 2–1 Solaris Live Upgrade Process

The context describes the illustration.

The following sections describe the Solaris Live Upgrade process.

  1. Creating a Boot Environment

  2. Upgrading a Boot Environment

  3. Activating a Boot Environment

  4. Falling Back to the Original Boot Environment
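The steps above map onto a small set of commands. The following is a minimal sketch of the complete cycle, assuming a new boot environment named second_disk, a spare slice c0t1d0s0, and a placeholder network path for the installation image:

# lucreate -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs
# luupgrade -u -n second_disk -s /net/installmachine/export/Solaris_10/OS_image
# luactivate second_disk
# init 6

If the new boot environment fails, you fall back by running the luactivate command with the name of the original boot environment and rebooting, as described in Falling Back to the Original Boot Environment.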

Creating a Boot Environment

The process of creating a boot environment provides a method of copying critical file systems from an active boot environment to a new boot environment. The disk is reorganized if necessary, file systems are customized, and the critical file systems are copied to the new boot environment.

File System Types

Solaris Live Upgrade distinguishes between two file system types: critical file systems and shareable file systems. The following table describes these file system types.

File System Type 

Description  

Examples and More Information 

Critical file systems 

Critical file systems are required by the Solaris OS. These file systems are separate mount points in the vfstab of the active and inactive boot environments. These file systems are always copied from the source to the inactive boot environment. Critical file systems are sometimes referred to as nonshareable.

Examples are root (/), /usr, /var, and /opt.

Shareable file systems 

Shareable file systems are user-defined file systems, such as /export, that contain the same mount point in the vfstab of both the active and inactive boot environments. Therefore, updating shared files in the active boot environment also updates data in the inactive boot environment. When you create a new boot environment, shareable file systems are shared by default. But you can specify a destination slice, and then the file systems are copied.

/export is an example of a file system that can be shared.

For more detailed information about shareable file systems, see Guidelines for Selecting Slices for Shareable File Systems.

Swap 

Swap is a special shareable file system. Like a shareable file system, all swap slices are shared by default. But, if you specify a destination directory for swap, the swap slice is copied. 

For procedures about reconfiguring swap, see Guidelines for Selecting a Slice for a Swap File System. 

Creating RAID-1 Volumes on File Systems

Solaris Live Upgrade can create a boot environment with RAID-1 volumes (mirrors) on file systems. For an overview, see Creating a Boot Environment With RAID-1 Volume File Systems.

Copying File Systems

The process of creating a new boot environment begins by identifying an unused slice where a critical file system can be copied. If a slice is not available or a slice does not meet the minimum requirements, you need to format a new slice.

After the slice is defined, you can reconfigure the file systems on the new boot environment before the file systems are copied into the directories. You reconfigure file systems by splitting and merging them, which provides a simple way of editing the vfstab to connect and disconnect file system directories. You can merge file systems into their parent directories by specifying the same mount point. You can also split file systems from their parent directories by specifying different mount points.
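As a sketch of this reconfiguration, the following hedged example splits /usr from the root (/) file system by giving it its own slice; the boot environment name and slice names are placeholders:

# lucreate -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs \
-m /usr:/dev/dsk/c0t1d0s3:ufs

Merging works in the opposite direction: omitting the separate /usr entry and specifying only a root (/) slice would merge /usr back into its parent file system.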

After file systems are configured on the inactive boot environment, you begin the automatic copy. Critical file systems are copied to the designated directories. Shareable file systems are not copied, but are shared. The exception is that you can designate some shareable file systems to be copied. When the file systems are copied from the active to the inactive boot environment, the files are directed to the new directories. The active boot environment is not changed in any way.

For procedures about splitting and merging file systems 

Chapter 4, Using Solaris Live Upgrade to Create a Boot Environment (Tasks)

For an overview of creating a boot environment with RAID-1 volume file systems 

Creating a Boot Environment With RAID-1 Volume File Systems

Examples of Creating a New Boot Environment

The following figures illustrate various ways of creating new boot environments.

Figure 2–2 shows that critical file system root (/) has been copied to another slice on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. The new boot environment is an exact duplicate with the root (/) file system on a new slice. The file systems /swap and /export/home are shared by the active and inactive boot environments.

Figure 2–2 Creating an Inactive Boot Environment – Copying the root (/) File System

The context describes the illustration.

Figure 2–3 shows critical file systems that have been split and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. On that slice, the root (/) file system contains the /usr, /var, and /opt directories. In the new boot environment, the root (/) file system is split and /usr and /opt are put on separate slices. The file systems /swap and /export/home are shared by both boot environments.

Figure 2–3 Creating an Inactive Boot Environment – Splitting File Systems

The context describes the illustration.

Figure 2–4 shows critical file systems that have been merged and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system, /usr, /var, and /opt with each file system on their own slice. In the new boot environment, /usr and /opt are merged into the root (/) file system on one slice. The file systems /swap and /export/home are shared by both boot environments.

Figure 2–4 Creating an Inactive Boot Environment – Merging File Systems

The context describes the illustration.

Creating a Boot Environment With RAID-1 Volume File Systems

Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems encapsulated in RAID-1 volumes. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. Solaris Live Upgrade enables a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system.

A volume can group disk slices across several disks to transparently appear as a single disk to the OS. Solaris Live Upgrade is limited to creating a boot environment for the root (/) file system that contains single-slice concatenations inside a RAID-1 volume (mirror). This limitation is because the boot PROM is restricted to choosing one slice from which to boot.

How to Manage Volumes With Solaris Live Upgrade

When you create a boot environment, you can use the lucreate command with the -m option to create a mirror, detach submirrors, and attach submirrors for the new boot environment.


Note –

If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.


For step-by-step procedures 

To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface)

For an overview of creating RAID-1 volumes when installing 

Chapter 8, Creating RAID-1 Volumes (Mirrors) During Installation (Overview), in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade

For in-depth information about other complex Solaris Volume Manager configurations that are not supported if you are using Solaris Live Upgrade 

Chapter 2, Storage Management Concepts, in Solaris Volume Manager Administration Guide

Mapping Solaris Volume Manager Tasks to Solaris Live Upgrade

Solaris Live Upgrade manages a subset of Solaris Volume Manager tasks. Table 2–1 shows the Solaris Volume Manager components that Solaris Live Upgrade can manage.

Table 2–1 Classes of Volumes

Term 

Description 

concatenation

A RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a mirror. 

mirror

A RAID-1 volume. See RAID-1 volume. 

RAID-1 volume

A class of volume that replicates data by maintaining multiple copies. A RAID-1 volume is sometimes called a mirror. A RAID-1 volume is composed of one or more RAID-0 volumes that are called submirrors.  

RAID-0 volume

A class of volume that can be a stripe or a concatenation. These components are also called submirrors. A stripe or concatenation is the basic building block for mirrors.  

state database

A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. The state database tracks the location and status of all known state database replicas. 

state database replica 

A copy of a state database. The replica ensures that the data in the database is valid. 

submirror

See RAID-0 volume. 

volume

A group of physical slices or other volumes that appear to the system as a single logical device. A volume is functionally identical to a physical disk in the view of an application or file system. In some command-line utilities, a volume is called a metadevice.  

Examples of Using Solaris Live Upgrade to Create RAID-1 Volumes

The following examples present command syntax for creating RAID-1 volumes for a new boot environment.

Create RAID-1 Volume on Two Physical Disks

Figure 2–5 shows a new boot environment with a RAID-1 volume (mirror) that is created on two physical disks. The following command created the new boot environment and the mirror.


# lucreate -n second_disk -m /:/dev/md/dsk/d30:mirror,ufs \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d31:attach -m /:/dev/dsk/c0t2d0s0,/dev/md/dsk/d32:attach \ 
-m -:/dev/dsk/c0t1d0s1:swap -m -:/dev/dsk/c0t2d0s1:swap

This command performs the following tasks:

  • Creates a new boot environment that is named second_disk

  • Creates the mirror d30 and configures a UFS file system on it

  • Creates single-slice concatenations d31 and d32 on slices c0t1d0s0 and c0t2d0s0 and attaches the concatenations to mirror d30 as submirrors

  • Copies the root (/) file system to the mirror

  • Configures swap on slices c0t1d0s1 and c0t2d0s1

Figure 2–5 Create a Boot Environment and Create a Mirror

The context describes the illustration.

Create a Boot Environment and Use the Existing Submirror

Figure 2–6 shows a new boot environment that contains a RAID-1 volume (mirror). The following command created the new boot environment and the mirror.


# lucreate -n second_disk -m /:/dev/md/dsk/d20:ufs,mirror \ 
-m /:/dev/dsk/c0t1d0s0:detach,attach,preserve

This command performs the following tasks:

  • Creates a new boot environment that is named second_disk

  • Creates the mirror d20 and configures a UFS file system on it

  • Detaches the slice c0t1d0s0 from its current mirror, preserves its contents, and attaches the slice to mirror d20 as a submirror

  • Because the contents of the submirror are preserved, no copy of the file system is performed

Figure 2–6 Create a Boot Environment and Use the Existing Submirror

The context describes the illustration.

Upgrading a Boot Environment

After you have created a boot environment, you can perform an upgrade on the boot environment. As part of that upgrade, the boot environment can contain RAID-1 volumes (mirrors) for any file systems. The upgrade does not affect any files in the active boot environment. When you are ready, you activate the new boot environment, which then becomes the current boot environment.
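A minimal sketch of such an upgrade, assuming the boot environment is named second_disk and the Solaris installation image is available on the network (the path is a placeholder):

# luupgrade -u -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image

The -u option upgrades the operating system image on the named boot environment, and the -s option points to the directory that contains the installation image.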

For procedures about upgrading a boot environment 

Chapter 5, Upgrading With Solaris Live Upgrade (Tasks)

For an example of upgrading a boot environment with a RAID–1 volume file system 

Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) (Command-Line Interface)

Figure 2–7 shows an upgrade to an inactive boot environment.

Figure 2–7 Upgrading an Inactive Boot Environment

The context describes the illustration.

Rather than an upgrade, you can install a Solaris Flash archive on a boot environment. The Solaris Flash installation feature enables you to create a single reference installation of the Solaris OS on a system. This system is called the master system. Then, you can replicate that installation on a number of systems that are called clone systems. In this situation, the inactive boot environment is a clone. When you install the Solaris Flash archive on a system, the archive replaces all the files on the existing boot environment as an initial installation would.

For procedures about installing a Solaris Flash archive, see Installing Solaris Flash Archives on a Boot Environment.
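A hedged sketch of an archive installation, assuming placeholder paths for the installation image and the archive:

# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archive/solaris10.flar

The -f option indicates that a Solaris Flash archive is being installed, the -s option points to the directory that contains the installation image, and the -a option names the archive.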

The following figures show an installation of a Solaris Flash archive on an inactive boot environment. Figure 2–8 shows a system with a single hard disk. Figure 2–9 shows a system with two hard disks.

Figure 2–8 Installing a Solaris Flash Archive on a Single Disk

The context describes the illustration.

Figure 2–9 Installing a Solaris Flash Archive on Two Disks

The context describes the illustration.

Activating a Boot Environment

When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain system files and directories are copied from the last-active boot environment to the boot environment being booted. When you reboot the system, the configuration that you installed on the new boot environment is active. The original boot environment then becomes an inactive boot environment.
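A sketch of the activation step, assuming the new boot environment is named second_disk:

# luactivate second_disk
# init 6

The init command is shown because Solaris Live Upgrade relies on a normal shutdown sequence to complete the switch; commands that bypass that sequence, such as halt, should not be used for this reboot.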

For procedures about activating a boot environment 

Activating a Boot Environment

For information about synchronizing the active and inactive boot environment 

Synchronizing Files Between Boot Environments

Figure 2–10 shows a switch after a reboot from an inactive to an active boot environment.

Figure 2–10 Activating an Inactive Boot Environment

The context describes the illustration.

Falling Back to the Original Boot Environment

If a failure occurs, you can quickly fall back to the original boot environment with an activation and reboot. The use of fallback takes only the time to reboot the system, which is much quicker than backing up and restoring the original. The new boot environment that failed to boot is preserved. The failure can then be analyzed. You can only fall back to the boot environment that was used by luactivate to activate the new boot environment.

You can fall back to the previous boot environment in the following ways:

Problem 

Action 

The new boot environment boots successfully, but you are not happy with the results. 

Run the luactivate command with the name of the previous boot environment and reboot.


x86 only –

Starting with the Solaris 10 1/06 release, you can fall back by selecting the original boot environment that is found on the GRUB menu. The original boot environment and the new boot environment must be based on the GRUB software. Booting from the GRUB menu does not synchronize files between the old and new boot environments. For more information about synchronizing files, see Forcing a Synchronization Between Boot Environments.


The new boot environment does not boot. 

Boot the fallback boot environment in single-user mode, run the luactivate command, and reboot.

You cannot boot in single-user mode. 

Perform the following steps, as shown in the sketch after this list: 

  • Boot from DVD or CD media or a net installation image

  • Mount the root (/) file system on the fallback boot environment

  • Run the luactivate command and reboot
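A hedged sketch of those steps, assuming the system was booted in single-user mode from media or a net installation image and that the fallback boot environment's root (/) file system is on the placeholder slice c0t0d0s0:

# mount /dev/dsk/c0t0d0s0 /mnt
# /mnt/sbin/luactivate
# umount /mnt
# init 6

The luactivate command is run from the mounted fallback boot environment so that the fallback environment becomes active on the next boot.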

For procedures to fall back, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Figure 2–11 shows the switch that is made when you reboot to fall back.

Figure 2–11 Fallback to the Original Boot Environment

The context describes the illustration.

Maintaining a Boot Environment

You can also do various maintenance activities such as checking status, renaming, or deleting a boot environment. For maintenance procedures, see Chapter 7, Maintaining Solaris Live Upgrade Boot Environments (Tasks).
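For example, checking status and deleting are each a single command; lustatus reports the state of every boot environment, and ludelete removes one by name (second_disk is a placeholder):

# lustatus
# ludelete second_disk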

Chapter 3 Solaris Live Upgrade (Planning)

This chapter provides guidelines and requirements for review before installing and using Solaris Live Upgrade. You also should review general information about upgrading in Upgrade Planning in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade. This chapter contains the following sections:

Solaris Live Upgrade Requirements

Before you install and use Solaris Live Upgrade, become familiar with these requirements.

Solaris Live Upgrade System Requirements

Solaris Live Upgrade is included in the Solaris software. You need to install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 11/06 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 11/06 release.

Table 3–1 lists releases that are supported by Solaris Live Upgrade.

Table 3–1 Supported Solaris Releases

Your Current Release 

Compatible Upgrade Release 

Solaris 8 OS 

Solaris 8, 9, or any Solaris 10 release 

Solaris 9 OS 

Solaris 9 or any Solaris 10 release 

Solaris 10 OS 

Any Solaris 10 release 

Installing Solaris Live Upgrade

You can install the Solaris Live Upgrade packages by using the following:

Be aware that the following patches might need to be installed for the correct operation of Solaris Live Upgrade.

Description 

For More Information 

Caution: Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches.


x86 only –

If this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. If you don't see the following error message, necessary patches still might not be installed. Always verify that all patches listed on the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade.


ERROR: Cannot find or is not executable: 
</sbin/biosdev>.
ERROR: One or more patches required 
by Live Upgrade has not been installed.

The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as fix defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, please check and make sure that you have the latest Solaris Live Upgrade patches installed. 

Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 72099 on the SunSolve web site.

If you are running the Solaris 8 or 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster that is recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. 

To install the Solaris Live Upgrade packages, use the pkgadd command. Or, install the recommended patch cluster for the Java 2 runtime environment and run the installer. The patch cluster is available on http://sunsolve.sun.com.

For instructions about installing the Solaris Live Upgrade software, see Installing Solaris Live Upgrade.

Required Packages

If you have problems with Solaris Live Upgrade, you might be missing packages. Check that your OS has the packages listed in the following table, which are required to use Solaris Live Upgrade.

For information about the software groups in the Solaris 10 release, see Disk Space Recommendations for Software Groups in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.

Table 3–2 Required Packages for Solaris Live Upgrade

Solaris 8 Release 

SUNWadmap, SUNWadmc, SUNWlibC, SUNWbzip, SUNWgzip, SUNWj2rt (see Note)

Solaris 9 Release 

SUNWadmap, SUNWadmc, SUNWadmfw, SUNWlibC, SUNWgzip, SUNWj2rt (see Note)

Solaris 10 Release 

SUNWadmap, SUNWadmlib-sysid, SUNWadmr, SUNWlibC, SUNWgzip (for Solaris 10 3/05 only), SUNWj5rt (see Note)


Note –

The SUNWj2rt package (Solaris 8 and 9 releases) and the SUNWj5rt package (Solaris 10 release) are needed only under the following conditions:

  • When you run the Solaris Live Upgrade installer to add Solaris Live Upgrade packages

  • When you upgrade and use CD media

To check for packages on your system, type the following command.


% pkginfo package_name
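For example, to verify several of the Solaris 10 packages from the table above in a single invocation (the output varies by system):

% pkginfo SUNWadmap SUNWadmlib-sysid SUNWadmr SUNWlibC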

Solaris Live Upgrade Disk Space Requirements

Follow general disk space requirements for an upgrade. See Chapter 4, System Requirements, Guidelines, and Upgrade (Planning), in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.

To estimate the file system size that is needed to create a boot environment, start the creation of a new boot environment. The size is calculated. You can then abort the process.

The disk on the new boot environment must be able to serve as a boot device. Some systems restrict which disks can serve as a boot device. Refer to your system's documentation to determine if any boot restrictions apply.

The disk might need to be prepared before you create the new boot environment. Check that the disk is formatted properly and that the slices are large enough to hold the file systems to be copied.

Solaris Live Upgrade Requirements if Creating RAID-1 Volumes (Mirrors)

Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems that are RAID-1 volumes (mirrors). Solaris Live Upgrade does not implement the full functionality of Solaris Volume Manager, but does require the following components of Solaris Volume Manager.

Table 3–3 Required Components for Solaris Live Upgrade and RAID-1 Volumes

Requirement  

Description 

For More Information 

You must create at least one state database and at least three state database replicas.  

A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. When a state database is copied, the replica protects against data loss from single points of failure. 

For information about creating a state database, see Chapter 6, State Database (Overview), in Solaris Volume Manager Administration Guide.

Solaris Live Upgrade supports only a RAID-1 volume (mirror) with single-slice concatenations on the root (/) file system.

A concatenation is a RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a RAID-1 volume. 

A RAID-1 volume can be composed of a maximum of three concatenations. 

For guidelines about creating mirrored file systems, see Guidelines for Selecting Slices for Mirrored File Systems.

Upgrading a System With Packages or Patches

You can use Solaris Live Upgrade to add patches and packages to a system. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command. When you use the luupgrade command, you can also use a Solaris Flash archive to install patches or packages.


Caution – Caution –

When upgrading and adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail during an upgrade, or can alter the active boot environment.

For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).


Type of Installation 

Description 

For More Information 

Adding patches to a boot environment  

Create a new boot environment and use the luupgrade command with the -t option.

To Add Patches to an Operating System Image on a Boot Environment (Command-Line Interface).

Adding packages to a boot environment 

Use the luupgrade command with the -p option.

To Add Packages to an Operating System Image on a Boot Environment (Command-Line Interface)

Using Solaris Live Upgrade to install a Solaris Flash archive 

An archive contains a complete copy of a boot environment with new packages and patches already included. This copy can be installed on multiple systems. 
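Hedged sketches of the first two rows of this table, assuming a boot environment named second_disk and placeholder spool directories; patch_id and package_name stand for the actual patch and package names:

# luupgrade -t -n second_disk -s /var/tmp/patches patch_id
# luupgrade -p -n second_disk -s /var/spool/pkg package_name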

 

Guidelines for Creating File Systems With the lucreate Command

The lucreate command with the -m option specifies which file systems, and how many file systems, are created in the new boot environment. You must specify the exact number of file systems you want to create by repeating this option. When using the -m option to create file systems, follow the guidelines in the sections that follow.

Guidelines for Selecting Slices for File Systems

When you create file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from creating invalid configurations for critical file systems. For example, you could type a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.

Do not overlap slices when reslicing disks. If this condition exists, the new boot environment appears to have been created, but when activated, the boot environment does not boot. The overlapping file systems might be corrupted.

For Solaris Live Upgrade to work properly, the vfstab file on the active boot environment must have valid contents and must have an entry for the root (/) file system at the minimum.

Guidelines for Selecting a Slice for the root (/) File System

When you create an inactive boot environment, you need to identify a slice where the root (/) file system is to be copied. The slice that you select must be one from which the system can boot and must be large enough to hold the root (/) file system.

Guidelines for Selecting Slices for Mirrored File Systems

You can create a new boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. When you create a new boot environment, the lucreate command with the -m option recognizes the following three types of devices, and critical file systems that are copied to the new boot environment can be of any of these types:

  • A physical slice in the form /dev/dsk/cwtxdysz

  • A Solaris Volume Manager volume in the form /dev/md/dsk/dnum

  • A Veritas Volume Manager volume in the form /dev/vx/dsk/volume_name


Note –

If you have problems upgrading with Veritas VxVM, see System Panics When Upgrading With Solaris Live Upgrade Running Veritas VxVm.


General Guidelines When Creating RAID-1 Volumes (Mirrored File Systems)

Use the following guidelines to check whether a RAID-1 volume is busy or resyncing, or whether its volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.

For volume naming guidelines, see RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.

Checking Status of Volumes

If a mirror or submirror needs maintenance or is busy, components cannot be detached. You should use the metastat command before creating a new boot environment and using the detach keyword. The metastat command checks if the mirror is in the process of resynchronization or if the mirror is in use. For information, see the man page metastat(1M).
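For example, to check the state of a mirror before detaching one of its submirrors (d30 is a placeholder volume name):

# metastat d30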

Detaching Volumes and Resynchronizing Mirrors

If you use the detach keyword to detach a submirror, lucreate checks if a device is currently resyncing. If the device is resyncing, you cannot detach the submirror and you see an error message.

Resynchronization is the process of copying data from one submirror to another submirror after the following problems:

  • Submirror failures

  • System crashes

  • A submirror has been taken offline and brought back online

  • The addition of a new submirror

For more information about resynchronization, see RAID-1 Volume (Mirror) Resynchronization in Solaris Volume Manager Administration Guide.

Using Solaris Volume Manager Commands

Use the lucreate command rather than Solaris Volume Manager commands to manipulate volumes on inactive boot environments. The Solaris Volume Manager software has no knowledge of boot environments, whereas the lucreate command contains checks that prevent you from inadvertently destroying a boot environment. For example, lucreate prevents you from overwriting or deleting a Solaris Volume Manager volume.

However, if you have already used Solaris Volume Manager software to create complex Solaris Volume Manager concatenations, stripes, and mirrors, you must use Solaris Volume Manager software to manipulate them. Solaris Live Upgrade is aware of these components and supports their use. Before using Solaris Volume Manager commands that can create, modify, or destroy volume components, use the lustatus or lufslist commands. These commands can determine which Solaris Volume Manager volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.

Guidelines for Selecting a Slice for a Swap File System

These guidelines contain configuration recommendations and examples for a swap slice.

Configuring Swap for the New Boot Environment

You can configure a swap slice in three ways by using the lucreate command with the -m option: you can share the current boot environment's swap slice (the default), copy swap to a new destination slice, or add swap slices. The following examples show the three ways of configuring swap. The current boot environment is configured with the root (/) file system on c0t0d0s0. The swap file system is on c0t0d0s1.
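Hedged sketches of the three approaches, using placeholder slice names. In the first command no swap is specified, so the new boot environment shares c0t0d0s1 by default. In the second, swap is copied to a new slice. The third configures both the current and a new swap slice:

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs \
-m -:/dev/dsk/c0t1d0s1:swap

# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs \
-m -:/dev/dsk/c0t0d0s1:swap -m -:/dev/dsk/c0t1d0s1:swap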

Failed Boot Environment Creation if Swap is in Use

A boot environment creation fails if the swap slice is being used by any boot environment except for the current boot environment. If the boot environment was created using the -s option, the alternate-source boot environment can use the swap slice, but not any other boot environment.

Guidelines for Selecting Slices for Shareable File Systems

Solaris Live Upgrade copies the entire contents of a slice to the designated new boot environment slice. You might want some large file systems on that slice to be shared between boot environments rather than copied, to conserve space and copying time. File systems that are critical to the OS, such as root (/) and /var, must be copied. File systems such as /home are not critical file systems and could be shared between boot environments. Shareable file systems must be user-defined file systems and must be on separate slices on both the active and new boot environments. You can reconfigure the disk several ways, depending on your needs.

Reconfiguring a disk 

Examples 

For More Information 

You can reslice the disk before creating the new boot environment and put the shareable file system on its own slice.  

For example, if the root (/) file system, /var, and /home are on the same slice, reconfigure the disk and put /home on its own slice. When you create any new boot environments, /home is shared with the new boot environment by default.

format(1M)

If you want to share a directory, the directory must be split off to its own slice. The directory is then a file system that can be shared with another boot environment. You can use the lucreate command with the -m option to create a new boot environment and split a directory off to its own slice. But, the new file system cannot yet be shared with the original boot environment. You need to run the lucreate command with the -m option again to create another boot environment. The two new boot environments can then share the directory.

For example, if you wanted to upgrade from the Solaris 9 release to the Solaris 10 11/06 release and share /home, you could run the lucreate command with the -m option. You could create a Solaris 9 release with /home as a separate file system on its own slice. Then run the lucreate command with the -m option again to duplicate that boot environment. This third boot environment can then be upgraded to the Solaris 10 11/06 release. /home is shared between the Solaris 9 and Solaris 10 11/06 releases.
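A hedged sketch of that sequence, with placeholder names. The first command creates the Solaris 9 boot environment with /home split onto its own slice. The second command uses the -s option to name that boot environment as the source; because /home is now a separate user-defined file system, it is shared rather than copied:

# lucreate -n be2 -m /:/dev/dsk/c0t4d0s0:ufs \
-m /home:/dev/dsk/c0t4d0s7:ufs
# lucreate -s be2 -n be3 -m /:/dev/dsk/c0t5d0s0:ufs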

For a description of shareable and critical file systems, see File System Types.

Customizing a New Boot Environment's Content

When you create a new boot environment, some directories and files can be excluded from a copy to the new boot environment. If you have excluded a directory, you can also reinstate specified subdirectories or files under the excluded directory. These subdirectories or files that have been restored are then copied to the new boot environment. For example, you could exclude from the copy all files and directories in /etc/mail, but include all files and directories in /etc/mail/staff. The following command copies the staff subdirectory to the new boot environment.


# lucreate -n second_disk -x /etc/mail -y /etc/mail/staff

Caution – Caution –

Use the file-exclusion options with caution. Do not remove files or directories that are required by the system.


The following table lists the lucreate command options for removing and restoring directories and files.

How Specified? 

Exclude Options 

Include Options 

Specify the name of the directory or file 

-x exclude_dir

-y include_dir

Use a file that contains a list 

-f list_filename or -z list_filename

-Y list_filename or -z list_filename
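A hedged sketch of the list-file form: with the -z option, each line of the file names a file or directory, prefixed with a minus sign to exclude it or a plus sign to include it (the list path and contents are placeholders):

# cat /var/tmp/filter.list
- /etc/mail
+ /etc/mail/staff
# lucreate -n second_disk -z /var/tmp/filter.list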

For examples of customizing the directories and files when creating a boot environment, see To Create a Boot Environment and Customize the Content (Command-Line Interface).

Synchronizing Files Between Boot Environments

When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain critical system files and directories might be copied from the last-active boot environment to the boot environment being booted. Those files and directories that have changed are copied.

Adding Files to the /etc/lu/synclist

Solaris Live Upgrade checks for critical files that have changed. If these files' content is not the same in both boot environments, they are copied from the active boot environment to the new boot environment. Synchronizing is meant for critical files such as /etc/passwd or /etc/group that might have changed since the new boot environment was created.

The /etc/lu/synclist file contains a list of directories and files that are synchronized. In some instances, you might want to copy other files from the active boot environment to the new boot environment. You can add directories and files to /etc/lu/synclist if necessary.

Adding files not listed in the /etc/lu/synclist could cause a system to become unbootable. The synchronization process only copies files and creates directories. The process does not remove files and directories.

The following example of the /etc/lu/synclist file shows the standard directories and files that are synchronized for this system.


/var/mail                    OVERWRITE
/var/spool/mqueue            OVERWRITE
/var/spool/cron/crontabs     OVERWRITE
/var/dhcp                    OVERWRITE
/etc/passwd                  OVERWRITE
/etc/shadow                  OVERWRITE
/etc/opasswd                 OVERWRITE
/etc/oshadow                 OVERWRITE
/etc/group                   OVERWRITE
/etc/pwhist                  OVERWRITE
/etc/default/passwd          OVERWRITE
/etc/dfs                     OVERWRITE
/var/log/syslog              APPEND
/var/adm/messages            APPEND

Examples of directories and files that might be appropriate to add to the synclist file are the following:


/var/yp                    OVERWRITE
/etc/mail                  OVERWRITE
/etc/resolv.conf           OVERWRITE
/etc/domainname            OVERWRITE

The synclist file entries can be files or directories. The second field is the method of updating that occurs on the activation of the boot environment. You can choose from three methods to update files:

  • OVERWRITE – The contents of the active boot environment's file overwrite the contents of the new boot environment's file.

  • APPEND – The contents of the active boot environment's file are added to the end of the new boot environment's file.

  • PREPEND – The contents of the active boot environment's file are added to the beginning of the new boot environment's file.

Forcing a Synchronization Between Boot Environments

The first time you boot from a newly created boot environment, Solaris Live Upgrade synchronizes the new boot environment with the boot environment that was last active. After this initial boot and synchronization, Solaris Live Upgrade does not perform a synchronization unless requested.

You might want to force a synchronization if you are maintaining multiple versions of the Solaris OS. You might want changes in files such as email or passwd/group to be present in the boot environment that you are activating. If you force a synchronization, Solaris Live Upgrade checks for conflicts between files that are subject to synchronization. When the new boot environment is booted and a conflict is detected, a warning is issued and the files are not synchronized. Activation can be completed successfully, despite such a conflict. A conflict can occur if you make changes to the same file on both the new boot environment and the active boot environment. For example, you make changes to the /etc/passwd file on the original boot environment. Then you make other changes to the /etc/passwd file on the new boot environment. The synchronization process cannot choose which file to copy for the synchronization.
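The force itself is an option to the luactivate command; a hedged one-line sketch, assuming the target boot environment is named second_disk:

# luactivate -s second_disk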


Caution – Caution –

Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 11/06 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 11/06 files might not be compatible with the Solaris 9 files.


x86: Activating a Boot Environment With the GRUB Menu

Starting with the Solaris 10 1/06 release, a GRUB boot menu provides an optional method of switching between boot environments. The GRUB menu is an alternative to activating with the luactivate command or the Activate menu.

Task 

Information 

To activate a boot environment with the GRUB menu 

x86: To Activate a Boot Environment With the GRUB Menu (Command-Line Interface)

To fall back to the original boot environment with a GRUB menu 

x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu

For overview and planning information for GRUB 

Chapter 6, GRUB Based Booting for Solaris Installation, in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade

For a complete GRUB overview and system administration tasks 

System Administration Guide: Basic Administration

Using Solaris Live Upgrade From a Remote System

When viewing the character user interface remotely, such as over a tip line, you might need to set the TERM environment variable to VT220. Also, when using the Common Desktop Environment (CDE), set the value of the TERM variable to dtterm, rather than xterm.
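For example, in a Bourne-style shell you would set the variable as follows before running the lu command (vt220 is the terminal type; adjust it for your terminal):

# TERM=vt220; export TERM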

Chapter 4 Using Solaris Live Upgrade to Create a Boot Environment (Tasks)

This chapter explains how to install Solaris Live Upgrade, how to use the menus, and how to create a boot environment. This chapter contains the following sections:

About Solaris Live Upgrade Interfaces

You can run Solaris Live Upgrade with a character user interface (CUI) or the command-line interface (CLI). Procedures for both the CUI and CLI are provided in the following sections.

Interface Type 

Description 

Character user interface (CUI) 

The CUI does not provide access to all features of Solaris Live Upgrade. The CUI does not run in multibyte locales and 8-bit locales.  

Command-line interface (CLI) 

The CLI procedures in this document cover the basic uses of the Solaris Live Upgrade commands. See Chapter 10, Solaris Live Upgrade (Command Reference) for a list of commands and also see the appropriate, associated man pages for more options to use with these commands.

Using Solaris Live Upgrade Menus (CUI)

Figure 4–1 Solaris Live Upgrade Main Menu

The screen capture shows Solaris Live Upgrade tasks and
the Enter and Help keys.

Navigation through the menus of the Solaris Live Upgrade character user interface requires that you use arrow keys and function keys. Use arrow keys to navigate up and down before making a selection or to place the cursor in a field. To perform a task, use the function keys. At the bottom of the menu, you see black rectangles that represent function keys on the keyboard. For example, the first black rectangle represents F1 and the second black rectangle represents F2. Rectangles that are active contain a word that represents a task, such as Save. The Configuration menu notes the function key number plus the task, rather than a rectangle.

In the following procedures, you might be asked to press a function key. If your function keys do not properly map to the function keys on the Solaris Live Upgrade menus, use Control-F plus the appropriate number.

Task Map: Installing Solaris Live Upgrade and Creating Boot Environments

Table 4–1 Task Map: Using Solaris Live Upgrade

Task  

Description 

For Instructions 

Install patches on your system 

Solaris Live Upgrade requires a limited set of patch revisions 

Installing Patches Needed by Solaris Live Upgrade

Install Solaris Live Upgrade packages 

Install packages on your OS 

Installing Solaris Live Upgrade

Start Solaris Live Upgrade 

Start the Solaris Live Upgrade main menu 

Starting and Stopping Solaris Live Upgrade (Character User Interface)

Create a boot environment 

Copy and reconfigure file systems to an inactive boot environment 

Creating a New Boot Environment

Installing Solaris Live Upgrade

You need to install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 11/06 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 11/06 release.

Some patches might be required. Install these patches before you install Solaris Live Upgrade packages. For more information, see the following:

Installing Patches Needed by Solaris Live Upgrade

Description 

For More Information 


Caution – Caution –

Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches.



x86 only –

If this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. If you don't see the following error message, necessary patches still might not be installed. Always verify that all patches listed on the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade.


ERROR: Cannot find or is not 
executable: </sbin/biosdev>.
ERROR: One or more patches required by 
Live Upgrade has not been installed.

The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as fix defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, please check and make sure that you have the latest Solaris Live Upgrade patches installed. 

Ensure you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 72099 on the SunSolve web site.

If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster that is recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. 

To install the Solaris Live Upgrade packages, use the pkgadd command. Or, install the recommended patch cluster for the Java 2 runtime environment and run the installer. The patch cluster is available at http://sunsolve.sun.com.

To Install Required Patches

  1. From the SunSolve web site, obtain the list of patches.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Install the patches with the patchadd command.


    # patchadd path_to_patches
    
  4. Reboot the system if necessary. Certain patches require a reboot to be effective.

    x86 only: Rebooting the system is required or Solaris Live Upgrade fails.

To Install Solaris Live Upgrade With the pkgadd Command

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Install the packages in the following order.


    # pkgadd -d path_to_packages SUNWlur SUNWluu   
    
    path_to_packages

    Specifies the absolute path to the software packages.

  3. Verify that the packages have been installed successfully.


    # pkgchk -v SUNWlur SUNWluu
    

To Install Solaris Live Upgrade With the Solaris Installation Program


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Insert the Solaris Operating System DVD or Solaris Software - 2 CD.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Run the installer for the media you are using.

    • If you are using the Solaris Operating System DVD, change directories to the installer and run the installer.

      • For SPARC based systems:


        # cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
        # ./liveupgrade20
        
      • For x86 based systems:


        # cd /cdrom/cdrom0/Solaris_10/Tools/Installers
        # ./liveupgrade20
        

      The Solaris installation program GUI is displayed.

    • If you are using the Solaris Software - 2 CD, run the installer.


      % ./installer
      

      The Solaris installation program GUI is displayed.

  4. From the Select Type of Install panel, click Custom.

  5. On the Locale Selection panel, click the language to be installed.

  6. Choose the software to install.

    • For DVD, on the Component Selection panel, click Next to install the packages.

    • For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click on the other software choices to deselect them.

  7. Follow the directions on the Solaris installation program panels to install the software.

Starting and Stopping Solaris Live Upgrade (Character User Interface)

This procedure starts and stops the Solaris Live Upgrade menu program.

To Start Solaris Live Upgrade Menus


Note –

When viewing the character user interface remotely, such as over a tip line, you might need to set the TERM environment variable to VT220. Also, when using the Common Desktop Environment (CDE), set the value of the TERM variable to dtterm, rather than xterm.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/lu
    

    The Solaris Live Upgrade main menu is displayed.

    Figure 4–2 Solaris Live Upgrade Main Menu

    The screen capture shows Solaris Live Upgrade tasks and
the Enter and Help keys.

To Stop Solaris Live Upgrade Menus

    From the main menu, select Exit.

Creating a New Boot Environment

Creating a boot environment provides a method of copying critical file systems from the active boot environment to a new boot environment. The CUI's Create menu and Configuration submenu, and the lucreate command, enable you to reorganize a disk if necessary, customize file systems, and copy the critical file systems to the new boot environment.

Before file systems are copied to the new boot environment, they can be customized so that critical file system directories are either merged into their parent directory or split from their parent directory. User-defined (shareable) file systems are shared between boot environments by default. But shareable file systems can be copied if needed. Swap, which is a shareable file system, can be split and merged also. For an overview of critical and shareable file systems, see File System Types.

To Create a Boot Environment (Character User Interface)

  1. From the main menu, select Create.

    The system displays the Create a Boot Environment submenu.

  2. Type the name of the active boot environment (if necessary) and the new boot environment and confirm. You are only required to type the name of the active boot environment the first time you create a boot environment.

    The boot environment name can be no longer than 30 characters, can contain only alphanumeric characters, and can contain no multibyte characters.


    Name of Current Boot Environment:    solaris8
    Name of New Boot Environment:   solaris10 
    
  3. To save your changes, press F3.

    The configuration menu appears.

    Figure 4–3 Solaris Live Upgrade Configuration Menu

    The context describes the screen capture.

    The configuration menu contains the following parts:

    • The original boot environment is located at the top of the screen. The boot environment to be created is at the bottom.

    • The Device field contains the following information.

      • The name of a disk device of the form /dev/dsk/cwtxdysz.

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum.

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name.

      • The area for selecting a critical file system is blank until you select a critical file system. The critical file systems such as /usr, /var, or /opt can be split or merged with the root (/) file system.

      • Shareable file systems such as /export or swap are displayed in the Device field. These file systems contain the same mount point in both the source and target boot environments. Swap is shared by default, but you can also split and merge (add and remove) swap slices.

        For an overview of critical and shareable file systems, see File System Types.

    • The FS_Type field enables you to change file system type. The file system type can be one of the following:

      • vxfs, which indicates a Veritas file system

      • swap, which indicates a swap file system

      • ufs, which indicates a UFS file system

  4. (Optional) The following tasks can be done at any time:

    • To print the information onscreen to an ASCII file, press F5.

    • To scroll through the file system list, press Control-X.

      You can then switch between the file systems of the active and new boot environment and scroll.

    • To exit the Configuration menu at any time, press F6.

      • If you are in the Configuration menu, changes are not saved and file systems are not altered.

      • If you are in a Configuration submenu, you return to the Configuration menu.

  5. Select an available slice by pressing F2.

    The Choices menu displays the available slices on the system for the field where the cursor is placed. The menu displays a Device field and an FS_Type field.

    1. Use the arrow keys to place the cursor in a field to select a slice or file system type.

      • When you place your cursor in the Device field, all free slices are displayed. For the root (/) file system, the Choices menu only displays free slices that meet the root (/) file system limitations. See Guidelines for Selecting a Slice for the root (/) File System.

      • When you place your cursor in the FS_Type field, all available file system types are displayed.

      • Slices in bold can be selected for the current file system. The required slice size is estimated as the size of the file system plus 30 percent to accommodate an upgrade. For example, a file system that currently occupies 1 Gbyte needs a slice of about 1.3 Gbytes.

      • Slices not in bold are too small to support the given file system. To reslice a disk, see Step 6.

    2. Press Return to choose a slice.

      The slice appears in the Device field or the file system type changes in the FS_Type field.

  6. (Optional) If available slices do not meet the minimum requirements, to reslice any available disks, press F4.

    The Solaris Live Upgrade Slice Configuration menu appears.

    The format(1M) command runs, which enables you to create new slices. Follow the screen prompts to create a new slice.

    To navigate through this menu, use the arrow keys to move between the Device field and FS_Type field. The Size (Mbytes) field is automatically completed as the devices are selected.

    1. To free a device, press Control-D.

      The slice is now available and appears on the Choices menu.

    2. To return to the Configuration menu, press F3.

  7. (Optional) Splitting critical file systems puts the file systems on separate mount points. To split a file system, do the following:

    (To merge file systems, see Step 8.)

    1. Select the file system to split.

      You can split or exclude file systems such as /usr, /var, or /opt from their parent directory.


      Note –

      When creating file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from making invalid configurations on critical file systems. For example, you could enter a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.


    2. Press F8.

    3. Type the file system name for the new boot environment, for example:


      Enter the directory that will be a separate file system 
      on the new boot environment: /opt
      

      When the new file system is verified, a new line is added to the screen.

    4. To return to the Configuration menu, press F3.

      The Configuration menu is displayed.

  8. (Optional) Merging puts the file systems on the same mount point. To merge a file system into its parent directory:

    (To split file systems, see Step 7.)

    1. Select the file system to merge.

      You can merge file systems such as /usr, /var, or /opt into their parent directory.

    2. Press F9.

      The file systems that will be combined are displayed, for example:


      /opt will be merged into /. 
    3. Press Return.

    4. To return to the Configuration menu, press F3.

      The Configuration menu is displayed.

  9. (Optional) Decide if you want to add or remove swap slices.

    • If you want to split a swap slice and put swap on a new slice, continue with Step 10.

    • If you want to remove a swap slice, continue with Step 11.

  10. (Optional) To split a swap slice, do the following:

    1. In the Device field, select the swap slice that you want to split.

    2. Press F8.

    3. At the prompt, type:


      Enter the directory that will be a separate filesystem on 
      the new BE: swap
      
    4. Press F2 (Choice).

      The Choice menu lists the available slices for swap.

    5. Select the slice to put swap on.

      The slice appears in the Device field and you have a new slice for swap.

  11. (Optional) To remove a swap slice, do the following:

    1. In the Device field, select the swap slice that you are removing.

    2. Press F9.

    3. At the prompt, type y.


      Slice /dev/dsk/c0t4d0s0 will not be swap partition. 
      Please confirm? [y, n]: y
      

      The swap slice no longer exists.

  12. Decide if you want to create the boot environment now or schedule the creation for later:

    • Press F3 to create the new boot environment now.

      The configuration is saved and you exit the configuration screen. The file systems are copied, the boot environment is made bootable, and an inactive boot environment is created.

      Creating a boot environment might take an hour or more, depending on your system configuration. The Solaris Live Upgrade main menu is then displayed.

    • If you want to schedule the creation for a later time, type y, then the start time, and an email address, as in this example.


      Do you want to schedule the copy? y
      Enter the time in 'at' format to schedule create: 8:15 PM
      Enter the address to which the copy log should be mailed: someone@anywhere.com

      You are notified of the completion by email.

      For information about time formats, see the at(1) man page.

      You can schedule only one job at a time.

    After the creation is complete, the inactive boot environment is ready to be upgraded. See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).

Procedure: To Create a Boot Environment for the First Time (Command-Line Interface)

The lucreate command that is used with the -m option specifies which file systems, and how many file systems, are created in the new boot environment. You must specify the exact number of file systems that you want to create by repeating this option. For example, a single use of the -m option specifies where to put all the file systems: you merge all the file systems from the original boot environment into the one file system that is specified by the -m option. If you specify the -m option twice, you create two file systems.
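
For illustration, two minimal sketches of this difference (the device names are hypothetical): the first command merges every file system from the original boot environment into a single root (/) file system; the second command creates two file systems, root (/) and /usr.


# lucreate -m /:/dev/dsk/c0t4d0s0:ufs -n second_disk
# lucreate -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s3:ufs -n second_disk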

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To create the new boot environment, type:


    # lucreate [-A 'BE_description'] -c BE_name \
     -m mountpoint:device[,metadevice]:fs_options [-m ...] -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -c BE_name

    Assigns the name BE_name to the active boot environment. This option is not required and is only used when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.

    The default name is chosen according to the following criteria:

    • If the physical boot device can be determined, then the base name of the physical boot device is used to name the current boot environment.

      For example, if the physical boot device is /dev/dsk/c0t0d0s0, then the current boot environment is given the name c0t0d0s0.

    • If the physical boot device cannot be determined, then names from the uname command with the -s and -r options are combined to produce the name.

      For example, if the uname -s returns the OS name of SunOS and the uname -r returns the release name of 5.9, then the name SunOS5.9 is given to the current boot environment.

    • If both of the above cannot determine the name, then the name current is used to name the current boot environment.


    Note –

    If you use the -c option after the first boot environment creation, the option is ignored or an error message is displayed.

    • If the name specified is the same as the current boot environment name, the option is ignored.

    • If the name specified is different than the current boot environment name, then an error message is displayed and the creation fails. The following example shows a boot environment name that causes an error message.


      # lucurr 
      c0t0d0s0
      # lucreate -c c1t1d1s1 -n newbe -m /:/dev/dsk/c1t1d1s1:ufs
      ERROR: current boot environment name is c0t0d0s0: cannot change
      name using <-c c1t1d1s1>

    -m mountpoint:device[,metadevice]:fs_options [-m ...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–1 Creating a Boot Environment (Command Line)

In this example, the active boot environment is named first_disk. The mount points for the file systems are noted by using the -m option. Two file systems are created, root (/) and /usr. The new boot environment is named second_disk. A description, mydescription, is associated with the name second_disk. Swap, in the new boot environment second_disk, is automatically shared from the source, first_disk.


# lucreate -A 'mydescription' -c first_disk  -m /:/dev/dsk/c0t4d0s0:ufs \
-m /usr:/dev/dsk/c0t4d0s3:ufs  -n second_disk
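
After the creation completes, you can verify the result with the lustatus command. The following is a hedged sketch; the output is abbreviated and illustrative.


# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
first_disk                 yes      yes    yes       no     -
second_disk                yes      no     no        yes    -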

Procedure: To Create a Boot Environment and Merge File Systems (Command-Line Interface)


Note –

You can use the lucreate command with the -m option to specify which file systems, and how many file systems, are created in the new boot environment. You must specify the exact number of file systems that you want to create by repeating this option. For example, a single use of the -m option specifies where to put all the file systems: you merge all the file systems from the original boot environment into one file system. If you specify the -m option twice, you create two file systems.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucreate [-A 'BE_description'] \ 
    -m mountpoint:device[,metadevice]:fs_options \ 
    -m [...] -m mountpoint:merged:fs_options -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–2 Creating a Boot Environment and Merging File Systems (Command-Line Interface)

In this example, the file systems on the current boot environment are root (/), /usr, and /opt. The /opt file system is remounted at /usr/opt and combined with its parent file system, /usr. The new boot environment is named second_disk. A description, mydescription, is associated with the name second_disk.


# lucreate -A 'mydescription' -c first_disk \
 -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
 -m /usr/opt:merged:ufs -n second_disk

Procedure: To Create a Boot Environment and Split File Systems (Command-Line Interface)


Note –

When creating file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from making invalid configurations on critical file systems. For example, you could enter a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.


When splitting a directory into multiple mount points, hard links are not maintained across file systems. For example, if /usr/stuff1/file is hard linked to /usr/stuff2/file, and /usr/stuff1 and /usr/stuff2 are split into separate file systems, the link between the files no longer exists. lucreate issues a warning message and a symbolic link is created to replace the lost hard link.
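
A minimal illustration of the underlying behavior, using the hypothetical file names from above (the inode number shown is illustrative): a hard link is two directory entries that share one inode, which is possible only within a single file system.


# ln /usr/stuff1/file /usr/stuff2/file
# ls -i /usr/stuff1/file /usr/stuff2/file
  1282 /usr/stuff1/file    1282 /usr/stuff2/file

After the split, the two paths reside in different file systems, so lucreate replaces the hard link with a symbolic link.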

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucreate [-A 'BE_description'] \
     -m mountpoint:device[,metadevice]:fs_options \ 
    -m mountpoint:device[,metadevice]:fs_options -n new_BE
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.


Example 4–3 Creating a Boot Environment and Splitting File Systems (Command-Line Interface)

In this example, the command that follows splits the root (/) file system over multiple disk slices in the new boot environment. Assume a source boot environment that has /usr, /var, and /opt all on root (/), which is mounted on /dev/dsk/c0t0d0s0.

On the new boot environment, separate /usr, /var, and /opt, mounting these file systems on their own slices, as follows:

/dev/dsk/c0t1d0s0 /

/dev/dsk/c0t1d0s1 /var

/dev/dsk/c0t1d0s7 /usr

/dev/dsk/c0t1d0s5 /opt

A description, mydescription, is associated with the boot environment name second_disk.


# lucreate -A 'mydescription' -c first_disk \
 -m /:/dev/dsk/c0t1d0s0:ufs -m /usr:/dev/dsk/c0t1d0s7:ufs  \ 
-m /var:/dev/dsk/c0t1d0s1:ufs -m /opt:/dev/dsk/c0t1d0s5:ufs \ 
-n second_disk

When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Procedure: To Create a Boot Environment and Reconfigure Swap (Command-Line Interface)

Swap slices are shared between boot environments by default. If you do not specify swap with the -m option, your current and new boot environments share the same swap slices. If you want to reconfigure the new boot environment's swap, use the -m option to add or remove swap slices in the new boot environment.


Note –

The swap slice cannot be in use by any boot environment except the current boot environment or, if the -s option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains a swap, UFS, or any other file system.

You can create a boot environment with the existing swap slices and then edit the vfstab file after the creation.
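
For reference, a swap entry in a boot environment's /etc/vfstab file has the following general form (the device name is hypothetical). The fields are device to mount, device to fsck, mount point, file system type, fsck pass, mount at boot, and mount options.


/dev/dsk/c0t4d0s1   -   -   swap   -   no   -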


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucreate  [-A 'BE_description'] \
     -m mountpoint:device[,metadevice]:fs_options \ 
    -m -:device:swap -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique.

    The new boot environment is created with swap moved to a different slice or device.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–4 Creating a Boot Environment and Reconfiguring Swap (Command-Line Interface)

In this example, the current boot environment contains root (/) on /dev/dsk/c0t0d0s0 and swap is on /dev/dsk/c0t0d0s1. The new boot environment copies root (/) to /dev/dsk/c0t4d0s0 and uses both /dev/dsk/c0t0d0s1 and /dev/dsk/c0t4d0s1 as swap slices. A description, mydescription, is associated with the boot environment name second_disk.


# lucreate -A 'mydescription' -c first_disk \ 
-m /:/dev/dsk/c0t4d0s0:ufs -m -:/dev/dsk/c0t0d0s1:swap \ 
-m -:/dev/dsk/c0t4d0s1:swap -n second_disk 

These swap assignments are effective only after booting from second_disk. If you have a long list of swap slices, use the -M option. See To Create a Boot Environment and Reconfigure Swap by Using a List (Command-Line Interface).


Procedure: To Create a Boot Environment and Reconfigure Swap by Using a List (Command-Line Interface)

If you have a long list of swap slices, create a swap list. lucreate uses this list for the swap slices in the new boot environment.


Note –

The swap slice cannot be in use by any boot environment except the current boot environment or, if the -s option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the swap slice contains a swap, UFS, or any other file system.


  1. Create a list of swap slices to be used in the new boot environment. The location and name of this file is user defined. In this example, the content of the /etc/lu/swapslices file is a list of devices and slices:


    -:/dev/dsk/c0t3d0s2:swap
    -:/dev/dsk/c0t4d0s2:swap
    -:/dev/dsk/c0t5d0s2:swap
    -:/dev/dsk/c1t3d0s2:swap
    -:/dev/dsk/c1t4d0s2:swap
    -:/dev/dsk/c1t5d0s2:swap
  2. Type:


    # lucreate  [-A 'BE_description'] \
     -m mountpoint:device[,metadevice]:fs_options \
    -M slice_list  -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -M slice_list

    List of -m options, which are collected in the file slice_list. Specify these arguments in the format that is specified for -m. Comment lines, which begin with a hash mark (#), are ignored. The -M option is useful when you have a long list of file systems for a boot environment. Note that you can combine -m and -M options. For example, you can store swap slices in slice_list and specify root (/) and /usr slices with -m.

    The -m and -M options support the listing of multiple slices for a particular mount point. In processing these slices, lucreate skips any unavailable slices and selects the first available slice.
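
Because comment lines are ignored, a slice_list file can be annotated, as in this hypothetical example:


# Swap slices reserved for the new boot environment
-:/dev/dsk/c0t3d0s2:swap
-:/dev/dsk/c0t4d0s2:swap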

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–5 Creating a Boot Environment and Reconfiguring Swap by Using a List (Command-Line Interface)

In this example, swap in the new boot environment is the list of slices that are noted in the /etc/lu/swapslices file. A description, mydescription, is associated with the name second_disk.


# lucreate -A 'mydescription' -c first_disk \ 
-m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \ 
-M /etc/lu/swapslices -n second_disk 

Procedure: To Create a Boot Environment and Copy a Shareable File System (Command-Line Interface)

If you want a shareable file system to be copied to the new boot environment, specify the mount point to be copied with the -m option. Otherwise, shareable file systems are shared by default, and maintain the same mount point in the vfstab file. Any updating that is applied to the shareable file system is available to both boot environments.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create the boot environment.


    # lucreate [-A 'BE_description'] \ 
    -m mountpoint:device[,metadevice]:fs_options \ 
    -m mountpoint:device[,metadevice]:fs_options  -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique.

    When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–6 Creating a Boot Environment and Copying a Shareable File System (Command-Line Interface)

In this example, the current boot environment contains two file systems, root (/) and /home. In the new boot environment, the root (/) file system is split into two file systems, root (/) and /usr. The /home file system is copied to the new boot environment. A description, mydescription, is associated with the boot environment name second_disk.


# lucreate -A 'mydescription' -c first_disk \ 
-m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s3:ufs \
-m /home:/dev/dsk/c0t4d0s4:ufs -n second_disk

Procedure: To Create a Boot Environment From a Different Source (Command-Line Interface)

The lucreate command creates a boot environment that is based on the file systems in the active boot environment. If you want to create a boot environment based on a boot environment other than the active boot environment, use lucreate with the -s option.


Note –

If you activate the new boot environment and need to fall back, you boot back to the boot environment that was last active, not the source boot environment.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create the boot environment.


    # lucreate [-A 'BE_description'] -s source_BE_name 
    -m mountpoint:device[,metadevice]:fs_options -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -s source_BE_name

    Specifies the source boot environment for the new boot environment. The source is a boot environment other than the active boot environment.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–7 Creating a Boot Environment From a Different Source (Command-Line Interface)

In this example, a boot environment is created that is based on the root (/) file system in the source boot environment named third_disk, which is not the active boot environment. A description, mydescription, is associated with the new boot environment named second_disk.


# lucreate -A 'mydescription' -s third_disk \ 
-m /:/dev/dsk/c0t4d0s0:ufs  -n second_disk

Procedure: To Create an Empty Boot Environment for a Solaris Flash Archive (Command-Line Interface)

The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When using the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. The slices are reserved for the file systems that are specified, but no file systems are copied. The boot environment is named, but not actually created until installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, file systems are installed on the reserved slices.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create the empty boot environment.


    # lucreate [-A 'BE_description'] -s - \ 
    -m mountpoint:device[,metadevice]:fs_options -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -s -

    Specifies that an empty boot environment be created.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.


Example 4–8 Creating an Empty Boot Environment for a Solaris Flash Archive (Command-Line Interface)

In this example, a boot environment is created but contains no file systems. A description, mydescription, is associated with the new boot environment that is named second_disk.


# lucreate -A 'mydescription' -s - \ 
-m /:/dev/dsk/c0t1d0s0:ufs  -n second_disk

When creation of the empty boot environment is complete, a Solaris Flash archive can be installed on it, and the boot environment can then be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).

For an example of creating and populating an empty boot environment, see Example of Creating an Empty Boot Environment and Installing a Solaris Flash Archive (Command-Line Interface).
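
As a preview of that procedure, installing an archive on the empty boot environment uses the luupgrade command with the -f option, along the following lines (the image and archive paths are hypothetical):


# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/os_image \
-a /net/server/archive/solaris10.flar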


Procedure: To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface)

When you create a boot environment, Solaris Live Upgrade uses Solaris Volume Manager technology to create RAID-1 volumes. When creating a boot environment, you can use Solaris Live Upgrade to create mirrors, attach and detach submirrors, and preserve existing file system content.

To use the mirroring capabilities of Solaris Live Upgrade, you must create a state database and a state database replica. A state database stores information on disk about the state of your Solaris Volume Manager configuration.
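
If your system does not yet have a state database, you can create the database and its replicas with the metadb command before you run lucreate. The following is a minimal sketch; the slice that holds the replicas is hypothetical.


# metadb -a -f -c 3 c0t0d0s7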

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To create the new boot environment, type:


    # lucreate [-A 'BE_description']  \ 
    -m mountpoint:device[,metadevice]:fs_options [-m...] \ 
    -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name BE_name. The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following types of file systems and keywords:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device.

        • mirror creates a RAID–1 volume or mirror on the specified device. In subsequent -m options, you must specify attach to attach at least one concatenation to the new mirror. The specified device must be correctly named. For example, a logical device name of /dev/md/dsk/d10 can serve as a mirror name. For more information about naming devices, see Overview of Solaris Volume Manager Components in Solaris Volume Manager Administration Guide.

        • detach removes a concatenation from a volume that is associated with a specified mount point. The volume does not need to be specified.

        • attach attaches a concatenation to the mirror that is associated with a specified mount point. The physical disk slice that is specified is made into a single device concatenation for attaching to the mirror. To specify a concatenation to attach to a disk, you append a comma and the name of that concatenation to the device name. If you omit the comma and the concatenation name, lucreate selects a free volume for the concatenation.

          lucreate allows you to create only concatenations that contain a single physical slice. This command allows you to attach up to three concatenations to a mirror.

        • preserve saves the existing file system and its content. This keyword enables you to bypass the copying process that copies the content of the source boot environment. Saving the content enables a quick creation of the new boot environment. For a particular mount point, you can use preserve with only one physical device. When you use preserve, lucreate checks that the device's content is suitable for a specified file system. This check is limited and cannot guarantee suitability.

          The preserve keyword can be used with both a physical slice and a Solaris Volume Manager volume.

          • If you use the preserve keyword when the UFS file system is on a physical slice, the content of the UFS file system is saved on the slice. In the following example of the -m option, the preserve keyword saves the content of the physical device c0t0d0s0 as the file system for the mount point for the root (/) file system.


            -m /:/dev/dsk/c0t0d0s0:preserve,ufs
            
          • If you use the preserve keyword when the UFS file system is on a volume, the contents of the UFS file system are saved on the volume.

            In the following example of the -m option, the preserve keyword saves the contents of the RAID-1 volume (mirror) d10 as the file system for the mount point for the root (/) file system.


            -m /:/dev/md/dsk/d10:preserve,ufs
            

            In the following example of the -m option, a RAID-1 volume (mirror) d10 is configured as the file system for the mount point for the root (/) file system. The single-slice concatenation d20 is detached from its current mirror. d20 is attached to mirror d10. The root (/) file system is preserved on submirror d20.


            -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/md/dsk/d20:detach,attach,preserve
            
    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–9 Creating a Boot Environment With a Mirror and Specifying Devices (Command Line)

In this example, the mount points for the file systems are specified by using the -m option.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:attach \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach -n another_disk


Example 4–10 Creating a Boot Environment With a Mirror and Not Specifying a Submirror Name (Command-Line Interface)

In this example, the mount points for the file systems are specified by using the -m option.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0:attach \ 
-m /:/dev/dsk/c0t1d0s0:attach -n another_disk

When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).



Example 4–11 Creating a Boot Environment and Detaching a Submirror (Command Line)

In this example, the mount points for the file systems are specified by using the -m option.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:detach,attach,preserve \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach -n another_disk

This command can be abbreviated as shown in the following example: the physical and logical device names are shortened, and the specifiers for the submirrors d1 and d2 are omitted.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0:detach,attach,preserve \ 
-m /:/dev/dsk/c0t1d0s0:attach -n another_disk

When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).



Example 4–12 Creating a Boot Environment, Detaching a Submirror, and Saving Its Contents (Command Line)

In this example, the mount points for the file systems are specified by using the -m option.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d20:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0:detach,attach,preserve \ 
-n another_disk

When the creation of the new boot environment is complete, the boot environment can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).



Example 4–13 Creating a Boot Environment With Two Mirrors (Command-Line Interface)

In this example, the mount points for the file systems are specified by using the -m option.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:attach \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach \ 
-m /opt:/dev/md/dsk/d11:ufs,mirror \ 
-m /opt:/dev/dsk/c2t0d0s1,/dev/md/dsk/d3:attach \ 
-m /opt:/dev/dsk/c3t1d0s1,/dev/md/dsk/d4:attach -n another_disk

When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Procedure: To Create a Boot Environment and Customize the Content (Command-Line Interface)

The content of the file systems on the new boot environment can be modified by using the following options. Directories and files that you exclude are not copied to the new boot environment.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To create the new boot environment, type:


    # lucreate -m mountpoint:device[,metadevice]:fs_options [-m ...]  \ 
    [-x exclude_dir] [-y include_dir] \
    [-Y include_list_file] \
    [-f exclude_list_file] \
    [-z filter_list] [-I] -n BE_name
    
    -m mountpoint:device[,metadevice]:fs_options [-m ...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap file system. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).

    -x exclude_dir

    Excludes files and directories by not copying them to the new boot environment. You can use multiple instances of this option to exclude more than one file or directory.

    exclude_dir is the name of the directory or file.

    -y include_dir

    Copies directories and files that are listed to the new boot environment. This option is used when you have excluded a directory, but want to restore individual subdirectories or files.

    include_dir is the name of the subdirectory or file to be included.

    -Y list_filename

    Copies directories and files from a list to the new boot environment. This option is used when you have excluded a directory, but want to restore individual subdirectories or files.

    • list_filename is the full path to a file that contains a list.

    • The list_filename file must contain one file per line.

    • If a line item is a directory, all subdirectories and files beneath that directory are included. If a line item is a file, only that file is included.

    -f list_filename

    Uses a list to exclude directories and files by not copying them to the new boot environment.

    • list_filename is the full path to a file that contains a list.

    • The list_filename file must contain one file per line.

    -z list_filename

    Uses a list to copy directories and files to the new boot environment. Each file or directory in the list is noted with a plus “+” or minus “-”. A plus indicates an included file or directory and a minus indicates an excluded file or directory.

    • list_filename is the full path to a file that contains a list.

    • The list_filename file must contain one file per line. A space must follow the plus or minus before the file name.

    • If a line item is a directory and is indicated with a + (plus), all subdirectories and files beneath that directory are included. If a line item is a file and is indicated with a + (plus), only that file is included.
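
For illustration, a hypothetical filter list: the first entry includes a directory and everything beneath it, and the second entry excludes one of its subdirectories.


+ /mystuff/latest
- /mystuff/latest/scratch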

    -I

    Overrides the integrity check of system files. Use this option with caution.

    To prevent you from removing important system files from a boot environment, lucreate runs an integrity check. This check examines all files that are registered in the system package database and stops the boot environment creation if any files are excluded. Use of this option overrides this integrity check. This option creates the boot environment more quickly, but might not detect problems.

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–14 Creating a Boot Environment and Excluding Files (Command-Line Interface)

In this example, the new boot environment is named second_disk. The source boot environment contains one file system, root (/). In the new boot environment, the /var/mail file system is split from the root (/) file system and put on another slice. The lucreate command configures a UFS file system for the mount points root (/) and /var/mail. Also, two /var/mail files, root and staff, are not copied to the new boot environment. Swap is automatically shared between the source and the new boot environment.


# lucreate -n second_disk \ 
-m /:/dev/dsk/c0t1d0s0:ufs -m /var/mail:/dev/dsk/c0t2d0s0:ufs  \  
-x /var/mail/root -x /var/mail/staff


Example 4–15 Creating a Boot Environment and Excluding and Including Files (Command-Line Interface)

In this example, the new boot environment is named second_disk. The source boot environment contains one file system for the OS, root (/). The source also contains a file system that is named /mystuff. lucreate configures a UFS file system for the mount points root (/) and /mystuff. Only two directories in /mystuff are copied to the new boot environment: /mystuff/latest and /mystuff/backup. Swap is automatically shared between the source and the new boot environment.


# lucreate -n second_disk \ 
-m /:/dev/dsk/c0t0d0s0:ufs -m /mystuff:/dev/dsk/c1t1d0s0:ufs  \  
-x /mystuff -y /mystuff/latest -y /mystuff/backup

Chapter 5 Upgrading With Solaris Live Upgrade (Tasks)

This chapter explains how to use Solaris Live Upgrade to upgrade and activate an inactive boot environment.

You can use Solaris Live Upgrade with menus or with the command-line interface (CLI). Procedures are documented for both interfaces. These procedures do not exhaust the possibilities for using Solaris Live Upgrade. For more information about commands, see Chapter 10, Solaris Live Upgrade (Command Reference) and the appropriate man pages, which more fully document CLI options.

Task Map: Upgrading a Boot Environment

Table 5–1 Task Map: Upgrading With Solaris Live Upgrade

Task  

Description 

For Instructions 

Either upgrade a boot environment or install a Solaris Flash archive. 

  • Upgrade the inactive boot environment with an OS image. See Upgrading a Boot Environment.

  • Install a Solaris Flash archive on an inactive boot environment. See Installing Solaris Flash Archives on a Boot Environment.

Activate an inactive boot environment. 

Makes changes effective and switches the inactive boot environment to active.

Activating a Boot Environment

(Optional) Switch back if a failure occurs when activating.

Reactivates the original boot environment if a failure occurs.

Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks)

Upgrading a Boot Environment

Use the Upgrade menu or the luupgrade command to upgrade a boot environment. This section provides the procedure for upgrading an inactive boot environment from files that are located on a network file system, a local file system, local tape, or a local device such as a DVD or CD.

Guidelines for Upgrading

When you upgrade a boot environment with the latest OS, you do not affect the active boot environment. The new files merge with the inactive boot environment's critical file systems, but shareable file systems are not changed.

Rather than upgrading, if you have created a Solaris Flash archive, you could install the archive on an inactive boot environment. The new files overwrite critical file systems of the inactive boot environment, but shareable file systems are not changed. See Installing Solaris Flash Archives on a Boot Environment.

You can upgrade an inactive boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. If the root (/) file system is included in a RAID-1 volume (mirror), the slice that is chosen for it must be a single-slice concatenation. For procedures about creating a boot environment with mirrored file systems, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).


Note –

If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.


Upgrading a System With Packages or Patches

You can use Solaris Live Upgrade to add patches and packages to a system. Solaris Live Upgrade creates a copy of the currently running system. This new boot environment can be upgraded or you can add packages or patches. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command.
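
As hedged sketches of the two operations that are summarized in Table 5–2 (the boot environment name, paths, patch ID, and package name are hypothetical):


# luupgrade -t -n second_disk -s /var/tmp/patches 119081-06
# luupgrade -p -n second_disk -s /var/spool/pkg SUNWpkgname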


Caution –

When adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or alter the active boot environment during an upgrade.

For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).


Table 5–2 Upgrading a Boot Environment With Packages and Patches

Type of Installation 

Description 

For More Information 

Adding patches to a boot environment.  

Create a new boot environment and use the luupgrade command with the -t option.

To Add Patches to an Operating System Image on a Boot Environment (Command-Line Interface)

Adding packages to a boot environment. 

Use the luupgrade command with the -p option.

To Add Packages to an Operating System Image on a Boot Environment (Command-Line Interface)

To Upgrade an Operating System Image on a Boot Environment (Character User Interface)

To upgrade by using this procedure, you must use a DVD or a combined installation image. For an installation with CDs, you must use the procedure To Upgrade an Operating System Image From Multiple CDs (Command-Line Interface).


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. From the Solaris Live Upgrade main menu, select Upgrade.

    The Upgrade menu screen is displayed.

  2. Type the new boot environment's name.

  3. Type the path to where the Solaris installation image is located.

    Installation Media Type 

    Description 

    Network File System 

    Specify the path to the network file system where the installation image is located.  

    Local file 

    Specify the path to the local file system where the installation image is located. 

    Local tape 

    Specify the local tape device and the position on the tape where the installation image is located. 

    Local device, DVD, or CD 

    Specify the local device and the path to the installation image. 

    • SPARC: If you are using a DVD or a CD, type the path to that disc, as in this example:


      /cdrom/cdrom0/s0/Solaris_10/s0
      
    • If you have a combined image on the network, type the path to the network file system as in this example:


      /net/installmachine/export/Solaris_10/os_image
      
  4. To upgrade, press F3.

    When the upgrade is completed, the main menu is displayed.

To Upgrade an Operating System Image on a Boot Environment (Command-Line Interface)

To upgrade by using this procedure, you must use a DVD or a combined installation image. If the installation requires more than one CD, you must use the procedure To Upgrade an Operating System Image From Multiple CDs (Command-Line Interface).

  1. Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Indicate the boot environment to upgrade and the path to the installation software by typing:


    # luupgrade -u -n BE_name -s os_image_path
    
    -u

    Upgrades an operating system image on a boot environment

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image


Example 5–1 Upgrading an OS Image on a Boot Environment From DVD Media (Command-Line Interface)

In this example, the second_disk boot environment is upgraded by using DVD media. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -u -n second_disk -s /cdrom/cdrom0/s0 


Example 5–2 Upgrading an OS Image on a Boot Environment From a Network Installation Image (Command-Line Interface)

In this example, the second_disk boot environment is upgraded. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -u -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image

To Upgrade an Operating System Image From Multiple CDs (Command-Line Interface)

Because the operating system image resides on more than one CD, you must use this upgrade procedure. Use the luupgrade command with the -i option to install any additional CDs.

  1. Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Indicate the boot environment to upgrade and the path to the installation software by typing:


    # luupgrade -u -n BE_name -s os_image_path
    
    -u

    Upgrades an operating system image on a boot environment

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image

  4. When the installer is finished with the contents of the first CD, insert the second CD.

  5. Run the luupgrade command again. This step is identical to Step 3, except that the -u option is replaced by the -i option. Choose whether to run the installer on the second CD with menus or with text.

    • This command runs the installer on the second CD with menus.


      # luupgrade -i -n BE_name -s os_image_path
      
    • This command runs the installer on the second CD with text and requires no user interaction.


      # luupgrade -i -n BE_name -s os_image_path -O '-nodisplay -noconsole'
      
    -i

    Installs additional CDs. The software looks for an installation program on the specified medium and runs that program. The medium that contains the installer program is specified with -s.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image.

    -O '-nodisplay -noconsole'

    (Optional) Runs the installer on the second CD in text mode and requires no user interaction.

  6. Repeat Step 4 and Step 5 for each CD that you want to install.

    The boot environment is ready to be activated. See Activating a Boot Environment.


Example 5–3 SPARC: Upgrading an Operating System Image From Multiple CDs (Command-Line Interface)

In this example, the second_disk boot environment is upgraded and the installation image is on two CDs: the Solaris Software - 1 and the Solaris Software - 2 CDs. The -u option determines if sufficient space for all the packages is on the CD set. The -O option with the -nodisplay and -noconsole options prevents the character user interface from displaying after the reading of the second CD. If you use these options, you are not prompted to type information. Omit these options to display the interface.

Install the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlur SUNWluu

Insert the Solaris Software - 1 CD and type:


# luupgrade -u -n second_disk -s /cdrom/cdrom0

Insert the Solaris Software - 2 CD and type the following.


# luupgrade -i -n second_disk -s /cdrom/cdrom0 -O '-nodisplay \ 
-noconsole'

Repeat the previous step for each CD that you want to install.


To Add Packages to an Operating System Image on a Boot Environment (Command-Line Interface)

In the following procedure, packages are removed from and added to a new boot environment.


Caution – Caution –

When you are adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or can alter the active boot environment.

For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To remove a package or set of packages from a new boot environment, type:


    # luupgrade -P -n second_disk package-name
    
    -P

    Indicates to remove the named package or packages from the boot environment

    -n BE_name

    Specifies the name of the boot environment where the package is to be removed

    package-name

    Specifies the names of the packages to be removed. Separate multiple package names with spaces.

  3. To add a package or a set of packages to the new boot environment, type:


    # luupgrade -p -n second_disk -s /path-to-packages package-name
    
    -p

    Indicates to add packages to the boot environment.

    -n BE_name

    Specifies the name of the boot environment where the package is to be added.

    -s path-to-packages

    Specifies the path to a directory that contains the package or packages that are to be added.

    package-name

    Specifies the names of the package or packages to be added. Separate multiple package names with a space.


Example 5–4 Adding Packages to an Operating System Image on a Boot Environment (Command-Line Interface)

In this example, packages are removed from and then added to the second_disk boot environment.


# luupgrade -P -n second_disk SUNWabc SUNWdef SUNWghi
# luupgrade -p -n second_disk -s /net/installmachine/export/packages \
SUNWijk SUNWlmn SUNWpkr

To Add Patches to an Operating System Image on a Boot Environment (Command-Line Interface)

In the following procedure, patches are removed from and added to a new boot environment.


Caution – Caution –

When you are adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or can alter the active boot environment.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To remove a patch or set of patches from a new boot environment, type:


    # luupgrade -T -n second_disk patch_name
    
    -T

    Indicates to remove the named patch or patches from the boot environment.

    -n BE_name

    Specifies the name of the boot environment where the patch or patches are to be removed.

    patch-name

    Specifies the names of the patches to be removed. Separate multiple patch names with spaces.

  3. To add a patch or a set of patches to the new boot environment, type the following command.


    # luupgrade -t -n second_disk -s /path-to-patches patch-name
    
    -t

    Indicates to add patches to the boot environment.

    -n BE_name

    Specifies the name of the boot environment where the patch is to be added.

    -s path-to-patches

    Specifies the path to the directory that contains the patches that are to be added.

    patch-name

    Specifies the names of the patch or patches that are to be added. Separate multiple patch names with a space.


Example 5–5 Adding Patches to an Operating System Image on a Boot Environment (Command-Line Interface)

In this example, patches are removed from and then added to the second_disk boot environment.


# luupgrade -T -n second_disk 222222-01
# luupgrade -t -n second_disk -s /net/installmachine/export/packages \
333333-01 444444-01

To Obtain Information on Packages Installed on a Boot Environment (Command-Line Interface)

The following procedure checks the integrity of the packages installed on the new boot environment.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To check the integrity of the newly installed packages on the new boot environment, type:


    # luupgrade -C -n second_disk -O "-v" package-name
    
    -C

    Indicates to run the pkgchk command on the named packages

    -n BE_name

    Specifies the name of the boot environment where the check is to be performed

    -O

    Passes the options directly to the pkgchk command

    package-name

    Specifies the names of the packages to be checked. Separate multiple package names with spaces. If package names are omitted, the check is done on all packages in the specified boot environment.

    -v

    Specifies to run the command in verbose mode


Example 5–6 Checking the Integrity of Packages on a Boot Environment (Command-Line Interface)

In this example, the packages SUNWabc, SUNWdef, and SUNWghi are checked to make sure they were installed properly and are not damaged.


# luupgrade -C -n second_disk SUNWabc SUNWdef SUNWghi

Upgrading by Using a JumpStart Profile

You can create a JumpStart profile to use with Solaris Live Upgrade. If you are familiar with the custom JumpStart program, this is the same profile that custom JumpStart uses. The following procedures enable you to create a profile, test the profile, and install by using the luupgrade command with the -j option.


Caution – Caution –

When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:

  • archive_location keyword in a profile

  • luupgrade command with -s, -a, -j, and -J options


For more information, see Installing Solaris Flash Archives on a Boot Environment.

To Create a Profile to be Used by Solaris Live Upgrade

This procedure shows you how to create a profile for use with Solaris Live Upgrade. You can use this profile to upgrade an inactive boot environment by using the luupgrade command with the -j option.

For procedures to use this profile, see the following sections:

  1. Use a text editor to create a text file.

    Name the file descriptively. Ensure that the name of the profile reflects how you intend to use the profile to install the Solaris software on a system. For example, you might name this profile upgrade_Solaris_10.

  2. Add profile keywords and values to the profile.

    Only the upgrade keywords in the following tables can be used in a Solaris Live Upgrade profile.

    The following table lists the keywords you can use with the install_type keyword values of upgrade or flash_install.

    Keywords for an Initial Archive Creation 

    Description 

    Reference 

    (Required) install_type

    Defines whether to upgrade the existing Solaris environment on a system or install a Solaris Flash archive on the system. Use the following values with this keyword: 

    • upgrade for an upgrade

    • flash_install for a Solaris Flash installation

    • flash_update for a Solaris Flash differential installation

    For a description of all the values for this keyword, see install_type Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Required for a Solaris Flash archive) archive_location

    Retrieves a Solaris Flash archive from a designated location.  

    For a list of values that can be used with this keyword, see archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) cluster (adding or deleting clusters)

    Designates whether a cluster is to be added or deleted from the software group that is to be installed on the system.  

    For a list of values that can be used with this keyword, see cluster Profile Keyword (Adding Software Groups) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) geo

    Designates the regional locale or locales that you want to install on a system or to add when upgrading a system.  

    For a list of values that can be used with this keyword, see geo Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) local_customization

    Before you install a Solaris Flash archive on a clone system, you can create custom scripts to preserve local configurations on the clone system. The local_customization keyword designates the directory where you have stored these scripts. The value is the path to the script on the clone system.

    For information about predeployment and postdeployment scripts, see Creating Customization Scripts in Solaris 10 11/06 Installation Guide: Solaris Flash Archives (Creation and Installation).

    (Optional) locale

    Designates the locale packages you want to install or add when upgrading.  

    For a list of values that can be used with this keyword, see locale Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) package

    Designates whether a package is to be added to or deleted from the software group that is to be installed on the system.  

    For a list of values that can be used with this keyword, see package Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    The following table lists the keywords you can use with the install_type keyword value flash_update.

    Keywords for a Differential Archive Creation 

    Description 

    Reference 

    (Required) install_type

    Defines the installation to install a Solaris Flash archive on the system. The value for a differential archive is flash_update.

    For a description of all the values for this keyword, see install_type Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Required) archive_location

    Retrieves a Solaris Flash archive from a designated location.  

    For a list of values that can be used with this keyword, see archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) forced_deployment

    Forces the installation of a Solaris Flash differential archive onto a clone system that is different from what the software expects. If you use forced_deployment, all new files are deleted to bring the clone system to the expected state. If you are not certain that you want files to be deleted, use the default, which protects new files by stopping the installation.

    For more information about this keyword, see forced_deployment Profile Keyword (Installing Solaris Flash Differential Archives) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) local_customization

    Before you install a Solaris Flash archive on a clone system, you can create custom scripts to preserve local configurations on the clone system. The local_customization keyword designates the directory where you have stored these scripts. The value is the path to the script on the clone system.

    For information about predeployment and postdeployment scripts, see Creating Customization Scripts in Solaris 10 11/06 Installation Guide: Solaris Flash Archives (Creation and Installation).

    (Optional) no_content_check

    When installing a clone system with a Solaris Flash differential archive, you can use the no_content_check keyword to ignore file-by-file validation. File-by-file validation ensures that the clone system is a duplicate of the master system. Avoid using this keyword unless you are sure the clone system is a duplicate of the original master system.

    For more information about this keyword, see no_content_check Profile Keyword (Installing Solaris Flash Archives) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) no_master_check

    When installing a clone system with a Solaris Flash differential archive, you can use the no_master_check keyword to skip the check that verifies the clone system was built from the original master system. Avoid using this keyword unless you are sure the clone system is a duplicate of the original master system.

    For more information about this keyword, see no_master_check Profile Keyword (Installing Solaris Flash Archives) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

  3. Save the profile in a directory on the local system.

  4. Ensure that root owns the profile and that the permissions are set to 644 (a command sketch follows this procedure).

  5. Test the profile (optional).

    For a procedure to test the profile, see To Test a Profile to Be Used by Solaris Live Upgrade.
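For example, assuming the profile from Step 1 was saved as /var/tmp/upgrade_Solaris_10 (an illustrative path), the following commands set the ownership and permissions that Step 4 requires:


# chown root:root /var/tmp/upgrade_Solaris_10
# chmod 644 /var/tmp/upgrade_Solaris_10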


Example 5–7 Creating a Solaris Live Upgrade Profile

In this example, a profile provides the upgrade parameters. This profile is to be used to upgrade an inactive boot environment with the Solaris Live Upgrade luupgrade command and the -u and -j options. This profile adds a package and a cluster. A regional locale and additional locales are also added to the profile. If you add locales to the profile, make sure that you have created a boot environment with additional disk space.

# profile keywords         profile values
# ----------------         -------------------
  install_type             upgrade
  package                  SUNWxwman add
  cluster                  SUNWCacc add
  geo                      C_Europe
  locale                   zh_TW
  locale                   zh_TW.BIG5
  locale                   zh_TW.UTF-8
  locale                   zh_HK.UTF-8
  locale                   zh_HK.BIG5HK
  locale                   zh
  locale                   zh_CN.GB18030
  locale                   zh_CN.GBK
  locale                   zh_CN.UTF-8


Example 5–8 Creating a Solaris Live Upgrade Profile to Install a Differential Archive

The following example of a profile is to be used by Solaris Live Upgrade to install a differential archive on a clone system. Only files that are specified by the differential archive are added, deleted, or changed. The Solaris Flash archive is retrieved from an NFS server. Because the image was built by the original master system, the clone system is not checked for a valid system image. This profile is to be used with the Solaris Live Upgrade luupgrade command and the -u and -j options.

# profile keywords         profile values
# ----------------         -------------------
 install_type              flash_update
 archive_location          nfs installserver:/export/solaris/archive/solarisarchive
 no_master_check

To use the luupgrade command to install the differential archive, see To Install a Solaris Flash Archive With a Profile (Command-Line Interface).


To Test a Profile to Be Used by Solaris Live Upgrade

After you create a profile, use the luupgrade command to test the profile. By looking at the installation output that is generated by luupgrade, you can quickly determine if a profile works as you intended.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Test the profile.


    # luupgrade -u -n BE_name -D -s os_image_path -j profile_path
    
    -u

    Upgrades an operating system image on a boot environment.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -D

    The luupgrade command uses the selected boot environment's disk configuration to test the profile options that are passed with the -j option.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.

    -j profile_path

    Path to a profile that is configured for an upgrade. The profile must be in a directory on the local machine.


Example 5–9 Testing a Profile by Using Solaris Live Upgrade

In the following example, the profile is named flash_profile. The profile is successfully tested on the inactive boot environment that is named second_disk.


# luupgrade -u -n second_disk -D -s /net/installsvr/export/u1/combined.u1wos \
 -j /var/tmp/flash_profile
Validating the contents of the media /net/installsvr/export/u1/combined.u1wos.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version 10.
Locating upgrade profile template to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE second_disk.
Determining packages to install or upgrade for BE second_disk.
Simulating the operating system upgrade of the BE second_disk.
The operating system upgrade simulation is complete.
INFORMATION: var/sadm/system/data/upgrade_cleanup contains a log of the
upgrade operation.
INFORMATION: var/sadm/system/data/upgrade_cleanup contains a log of
cleanup operations required.
The Solaris upgrade of the boot environment second_disk is complete.

You can now use the profile to upgrade an inactive boot environment.


To Upgrade With a Profile by Using Solaris Live Upgrade (Command-Line Interface)

This procedure provides step-by-step instructions for upgrading an OS by using a profile.

If you want to install a Solaris Flash archive by using a profile, see To Install a Solaris Flash Archive With a Profile (Command-Line Interface).

If you added locales to the profile, make sure that you have created a boot environment with additional disk space.


Caution – Caution –

When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:

  • archive_location keyword in a profile

  • luupgrade command with -s, -a, -j, and -J options


  1. Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Create a profile.

    See To Create a Profile to be Used by Solaris Live Upgrade for a list of upgrade keywords that can be used in a Solaris Live Upgrade profile.

  4. Type:


    # luupgrade -u -n BE_name -s os_image_path -j profile_path
    
    -u

    Upgrades an operating system image on a boot environment.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.

    -j profile_path

    Path to a profile. The profile must be in a directory on the local machine. For information about creating a profile, see To Create a Profile to be Used by Solaris Live Upgrade.

    The boot environment is ready to be activated.


Example 5–10 Upgrading a Boot Environment by Using a Custom JumpStart Profile (Command-Line Interface)

In this example, the second_disk boot environment is upgraded by using a profile. The -j option is used to access the profile. The boot environment is then ready to be activated. To create a profile, see To Create a Profile to be Used by Solaris Live Upgrade. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -u -n second_disk \ 
-s /net/installmachine/export/solarisX/OS_image \ 
-j /var/tmp/profile 

Installing Solaris Flash Archives on a Boot Environment

This section provides the procedures for using Solaris Live Upgrade to install Solaris Flash archives. Installing a Solaris Flash archive overwrites all files on the new boot environment except for shareable files. Archives are stored on the following media: an HTTP server, an NFS server, a local file, local tape, or a local device such as a DVD or CD.

Note the following issues with installing and creating a Solaris Flash archive.

Description 

Example 


Caution – Caution –

When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:

  • archive_location keyword in a profile

  • luupgrade command with -s, -a, -j, and -J options


For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive.  


Caution – Caution –

A Solaris Flash archive cannot be properly created when a non-global zone is installed. The Solaris Flash feature is not compatible with the Solaris Zones feature. If you create a Solaris Flash archive in a non-global zone or create an archive in a global zone that has non-global zones installed, the resulting archive does not install properly when the archive is deployed.


 

Description 

For More Information 

For examples of the correct syntax for paths that are associated with archive storage. 

See archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

To use the Solaris Flash installation feature, you install a master system and create the Solaris Flash archive.  

For more information about creating an archive, see Chapter 3, Creating Solaris Flash Archives (Tasks), in Solaris 10 11/06 Installation Guide: Solaris Flash Archives (Creation and Installation).

To Install a Solaris Flash Archive on a Boot Environment (Character User Interface)

  1. Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. From the Solaris Live Upgrade main menu, select Flash.

    The Flash an Inactive Boot Environment menu is displayed.

  3. Type the name of the boot environment where you want to install the Solaris Flash archive and the location of the installation media:


    Name of Boot Environment: Solaris_10
    Package media: /net/install-svr/export/Solaris_10/latest
    
  4. Press F1 to add an archive.

    An Archive Selection submenu is displayed.


    Location            - Retrieval Method
    <No Archives added> - Select ADD to add archives

    This menu enables you to build a list of archives. To add or remove archives, proceed with the following steps.

    1. To add an archive to the menu, press F1.

      A Select Retrieval Method submenu is displayed.


      HTTP
      NFS
      Local File
      Local Tape
      Local Device
    2. On the Select Retrieval Method menu, select the location of the Solaris Flash archive.

      Media Selected 

      Prompt 

      HTTP 

      Specify the URL and proxy information that is needed to access the Solaris Flash archive. 

      NFS 

      Specify the path to the network file system where the Solaris Flash archive is located. You can also specify the archive file name. 

      Local file 

      Specify the path to the local file system where the Solaris Flash archive is located. 

      Local tape 

      Specify the local tape device and the position on the tape where the Solaris Flash archive is located. 

      Local device 

      Specify the local device, the path to the Solaris Flash archive, and the type of file system on which the Solaris Flash archive is located.  

      Depending on the media you selected, a Retrieval submenu similar to the following example is displayed.


      NFS Location: 
    3. Type the path to the archive, as in the following example.


      NFS Location: host:/path/to/archive.flar
      
    4. Press F3 to add the archive to the list.

    5. (Optional) To remove an archive from the menu, press F2.

    6. When the list contains the archives that you want to install, press F6 to exit.

  5. Press F3 to install one or more archives.

    The Solaris Flash archive is installed on the boot environment. All files on the boot environment are overwritten, except for shareable files.

    The boot environment is ready for activation. See To Activate a Boot Environment (Character User Interface).

To Install a Solaris Flash Archive on a Boot Environment (Command-Line Interface)

  1. Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Type:


    # luupgrade -f -n BE_name -s os_image_path -a archive
    
    -f

    Indicates to install an operating system from a Solaris Flash archive.

    -n BE_name

    Specifies the name of the boot environment that is to be installed with an archive.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.

    -a archive

    Path to the Solaris Flash archive when the archive is available on the local file system. The operating system image versions that are specified with the -s option and the -a option must be identical.


Example 5–11 Installing Solaris Flash Archives on a Boot Environment (Command-Line Interface)

In this example, an archive is installed on the second_disk boot environment. The archive is located on the local system. The operating system versions that are specified with the -s and -a options are both Solaris 10 11/06 releases. All files are overwritten on second_disk except shareable files. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -f -n second_disk \ 
-s /net/installmachine/export/Solaris_10/OS_image \ 
-a /net/server/archive/10 

The boot environment is ready to be activated.


To Install a Solaris Flash Archive With a Profile (Command-Line Interface)

This procedure provides the steps to install a Solaris Flash archive or differential archive by using a profile.

If you added locales to the profile, make sure that you have created a boot environment with additional disk space.

  1. Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Create a profile.

    See To Create a Profile to be Used by Solaris Live Upgrade for a list of keywords that can be used in a Solaris Live Upgrade profile.

  4. Type:


    # luupgrade -f -n BE_name -s os_image_path -j profile_path
    
    -f

    Indicates to install an operating system from a Solaris Flash archive.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.

    -j profile_path

    Path to a JumpStart profile that is configured for a flash installation. The profile must be in a directory on the local machine. The -s option's operating system version and the Solaris Flash archive operating system version must be identical.

    The boot environment is ready to be activated.


Example 5–12 Installing a Solaris Flash Archive on a Boot Environment With a Profile (Command-Line Interface)

In this example, a profile provides the location of the archive to be installed.

# profile keywords         profile values
# ----------------         -------------------
 install_type              flash_install
 archive_location          nfs installserver:/export/solaris/flasharchive/solarisarchive
 

After creating the profile, you can run the luupgrade command and install the archive. The -j option is used to access the profile. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -f -n second_disk \ 
-s /net/installmachine/export/solarisX/OS_image \ 
-j /var/tmp/profile 

The boot environment is then ready to be activated. To create a profile, see To Create a Profile to be Used by Solaris Live Upgrade.


To Install a Solaris Flash Archive With a Profile Keyword (Command-Line Interface)

This procedure enables you to install a Solaris Flash archive and use the archive_location keyword at the command line rather than from a profile file. You can quickly retrieve an archive without the use of a profile file.

  1. Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Type:


    # luupgrade -f -n BE_name -s os_image_path -J 'archive_location path-to-profile'
    
    -f

    Specifies to upgrade an operating system from a Solaris Flash archive.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.

    -J 'archive_location path-to-profile'

    Specifies the archive_location profile keyword and the path to the JumpStart profile. The -s option's operating system version and the Solaris Flash archive operating system version must be identical. For the keyword values, see archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

    The boot environment is ready to be activated.


Example 5–13 Installing a Solaris Flash Archive By Using a Profile Keyword (Command-Line Interface)

In this example, an archive is installed on the second_disk boot environment. The -J option and the archive_location keyword are used to retrieve the archive. All files are overwritten on second_disk except shareable files. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -f -n second_disk \ 
-s /net/installmachine/export/solarisX/OS_image \ 
-J 'archive_location http://example.com/myflash.flar' 

Activating a Boot Environment

Activating a boot environment makes it bootable on the next reboot of the system. You can also switch back quickly to the original boot environment if a failure occurs on booting the newly active boot environment. See Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Description 

For More Information 

Use this procedure to activate a boot environment by using a character user interface (CUI). 


Note –

The first time you activate a boot environment, the Activate menu or the luactivate command must be used.


To Activate a Boot Environment (Character User Interface)

Use this procedure to activate a boot environment with the luactivate command.


Note –

The first time you activate a boot environment, the Activate menu or the luactivate command must be used.


To Activate a Boot Environment (Command-Line Interface)

Use this procedure to activate a boot environment and force a synchronization of files.  


Note –

Files are synchronized with the first activation. If you switch boot environments after the first activation, files are not synchronized.


To Activate a Boot Environment and Synchronize Files (Command-Line Interface)

x86: Use this procedure to activate a boot environment with the GRUB menu. 


Note –

A GRUB menu can facilitate switching from one boot environment to another. A boot environment appears in the GRUB menu after the first activation.


x86: To Activate a Boot Environment With the GRUB Menu (Command-Line Interface)

Requirements and Limitations for Activating a Boot Environment

To successfully activate a boot environment, that boot environment must meet the following conditions:

Description 

For More Information 

The boot environment must have a status of “complete.”  

To check status, see Displaying the Status of All Boot Environments, or see the lustatus sketch at the end of this section.

If the boot environment is not the current boot environment, you cannot have mounted the partitions of that boot environment by using the lumount or mount commands.

To view man pages, see lumount(1M) or mount(1M).

The boot environment that you want to activate cannot be involved in a comparison operation.  

For procedures, see Comparing Boot Environments.

If you want to reconfigure swap, make this change prior to booting the inactive boot environment. By default, all boot environments share the same swap devices.  

To reconfigure swap, see the procedures for creating a boot environment. 


x86 only –

If you have an x86 based system, you can also activate with the GRUB menu. For exceptions and limitations, see x86: Activating a Boot Environment With the GRUB Menu.
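You can verify the first of these conditions with the lustatus command before you activate. The following session is representative only; the boot environment names and states shown here are illustrative and will differ on your system:


# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
first_disk                 yes      yes    yes       no     -
second_disk                yes      no     no        yes    -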


To Activate a Boot Environment (Character User Interface)

The first time you boot from a newly created boot environment, Solaris Live Upgrade software synchronizes the new boot environment with the boot environment that was last active. “Synchronize” means that certain critical system files and directories are copied from the last-active boot environment to the boot environment being booted. Solaris Live Upgrade does not perform this synchronization after this initial boot unless you request to do so when prompted to force a synchronization.

For more information about synchronization, see Synchronizing Files Between Boot Environments.


x86 only –

If you have an x86 based system, you can also activate with the GRUB menu. For exceptions and limitations, see x86: Activating a Boot Environment With the GRUB Menu.


  1. From the Solaris Live Upgrade main menu, select Activate.

  2. Type the name of the boot environment to make active:


    Name of Boot Environment: Solaris_10
    Do you want to force a Live Upgrade sync operations: no
    
  3. You can either continue or force a synchronization of files.

    • Press Return to continue.

      The first time that the boot environment is booted, files are automatically synchronized.

    • You can force a synchronization of files, but use this feature with caution. Operating systems on each boot environment must be compatible with files that are being synchronized. To force a synchronization of files, type:


      Do you want to force a Live Upgrade sync operations: yes
      

      Caution – Caution –

      Use a forced synchronization with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 11/06 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 11/06 files might not be compatible with the Solaris 9 files.


  4. Press F3 to begin the activation process.

  5. Press Return to continue.

    The new boot environment is activated at the next reboot.

  6. To activate the inactive boot environment, reboot:


    # init 6
    

To Activate a Boot Environment (Command-Line Interface)

The following procedure switches a new boot environment to become the currently running boot environment.


x86 only –

If you have an x86 based system, you can also activate with the GRUB menu. For exceptions and limitations, see x86: Activating a Boot Environment With the GRUB Menu.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To activate the boot environment, type:


    # /sbin/luactivate  BE_name
    
    BE_name

    Specifies the name of the boot environment that is to be activated

  3. Reboot.


    # init 6
    

    Caution – Caution –

    Use only the init or shutdown commands to reboot. If you use the reboot, halt, or uadmin commands, the system does not switch boot environments. The last-active boot environment is booted again.



Example 5–14 Activating a Boot Environment (Command-Line Interface)

In this example, the second_disk boot environment is activated at the next reboot.


# /sbin/luactivate second_disk
# init 6

To Activate a Boot Environment and Synchronize Files (Command-Line Interface)

The first time you boot from a newly created boot environment, Solaris Live Upgrade software synchronizes the new boot environment with the boot environment that was last active. “Synchronize” means that certain critical system files and directories are copied from the last-active boot environment to the boot environment being booted. Solaris Live Upgrade does not perform this synchronization after the initial boot, unless you force synchronization with the luactivate command and the -s option.


x86 only –

When you switch between boot environments with the GRUB menu, files also are not synchronized. You must use the following procedure to synchronize files.


For more information about synchronization, see Synchronizing Files Between Boot Environments.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To activate the boot environment, type:


    # /sbin/luactivate  -s BE_name
    
    -s

    Forces a synchronization of files between the last-active boot environment and the new boot environment. The first time that a boot environment is activated, the files between the boot environments are synchronized. With subsequent activations, the files are not synchronized unless you use the -s option.


    Caution – Caution –

    Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 11/06 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 11/06 files might not be compatible with the Solaris 9 files.


    BE_name

    Specifies the name of the boot environment that is to be activated.

  3. Reboot.


    # init 6
    

Example 5–15 Activating a Boot Environment and Synchronizing Files (Command-Line Interface)

In this example, the second_disk boot environment is activated at the next reboot and the files are synchronized.


# /sbin/luactivate -s second_disk
# init 6

x86: Activating a Boot Environment With the GRUB Menu

A GRUB menu provides an optional method of switching between boot environments. The GRUB menu is an alternative to activating (booting) with the luactivate command or the Activate menu. The table below notes cautions and limitations when using the GRUB menu.

Table 5–3 x86: Activating With the GRUB Menu Summary

Task 

Description 

For More Information 

Caution

After you have activated a boot environment, do not change the disk order in the BIOS. Changing the order might cause the GRUB menu to become invalid. If this problem occurs, changing the disk order back to the original state fixes the GRUB menu. 

 

Activating a boot environment for the first time 

The first time you activate a boot environment, you must use the luactivate command or the Activate menu. The next time you boot, that boot environment's name is displayed in the GRUB main menu. You can thereafter switch to this boot environment by selecting the appropriate entry in the GRUB menu.

To Activate a Boot Environment (Command-Line Interface)

Synchronizing files 

The first time you activate a boot environment, files are synchronized between the current boot environment and the new boot environment. With subsequent activations, files are not synchronized. When you switch between boot environments with the GRUB menu, files also are not synchronized. You can force a synchronization when using the luactivate command with the -s option.

To Activate a Boot Environment and Synchronize Files (Command-Line Interface)

Boot environments created before the Solaris 10 1/06 release

If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command or the Activate menu. These older boot environments do not display on the GRUB menu.

To Activate a Boot Environment (Command-Line Interface)

Editing or customizing the GRUB menu entries 

The menu.lst file contains the information that is displayed in the GRUB menu. You can revise this file for the following reasons (a representative excerpt follows the caution below):

  • To add to the GRUB menu entries for operating systems other than the Solaris OS.

  • To customize booting behavior. For example, you could change booting to verbose mode or change the default time before the OS automatically boots.


Note –

If you want to change the GRUB menu, you need to locate the menu.lst file. For step-by-step instructions, see x86: Locating the GRUB Menu's menu.lst File (Tasks).



Caution – Caution –

Do not use the GRUB menu.lst file to modify Solaris Live Upgrade entries. Modifications could cause Solaris Live Upgrade to fail. Although you can use the menu.lst file to customize booting behavior, the preferred method for customization is to use the eeprom command. If you use the menu.lst file to customize, the Solaris OS entries might be modified during a software upgrade. Changes to the file could be lost.
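For orientation only, a representative menu.lst excerpt follows. The titles, disk tuples, and paths vary by installation. The second_disk entry is the kind of entry that Solaris Live Upgrade generates and that, per the caution above, you should not modify by hand.


title Solaris
root (hd0,0,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive

title second_disk
root (hd0,1,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive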


x86: To Activate a Boot Environment With the GRUB Menu (Command-Line Interface)

You can switch between two boot environments with the GRUB menu. Note the limitations that are summarized in Table 5–3.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Reboot the system.


    # init 6
    

    The GRUB main menu is displayed. Two boot entries are listed: Solaris and second_disk, which is a Solaris Live Upgrade boot environment. The failsafe entries are for recovery, if for some reason the primary OS does not boot.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris  failsafe                                                  |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. To activate a boot environment, use the arrow key to select the desired boot environment and press Return.

    The selected boot environment is booted and becomes the active boot environment.

Chapter 6 Failure Recovery: Falling Back to the Original Boot Environment (Tasks)

This chapter explains how to recover from an activation failure.

If a failure is detected after upgrading, or if an application is not compatible with an upgraded component, fall back to the original boot environment by using one of the following procedures, depending on your platform.

SPARC: Falling Back to the Original Boot Environment (Command-Line Interface)

You can fall back to the original boot environment by using one of three methods, depending on the state of the new boot environment:

SPARC: To Fall Back Despite Successful New Boot Environment Activation

Use this procedure when you have successfully activated your new boot environment, but are unhappy with the results.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /sbin/luactivate BE_name
    
    BE_name

    Specifies the name of the boot environment to be activated

  3. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

SPARC: To Fall Back From a Failed Boot Environment Activation

  1. At the OK prompt, boot the machine to single-user state from the Solaris Operating System DVD, Solaris Software - 1 CD, the network, or a local disk.


    OK boot device_name -s
    
    device_name

    Specifies the name of the device from which the system can boot, for example /dev/dsk/c0t0d0s0

  2. Type:


    # /sbin/luactivate BE_name
    
    BE_name

    Specifies the name of the boot environment to be activated

  3. At the prompt, type:


    Do you want to fallback to activate boot environment <disk name> 
    (yes or no)? yes
    

    A message displays that the fallback activation is successful.

  4. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

SPARC: To Fall Back to the Original Boot Environment by Using a DVD, CD, or Net Installation Image

Use this procedure to boot from a DVD, a CD, a net installation image, or another disk that can be booted. You need to mount the root (/) slice from the last-active boot environment. Then run the luactivate command, which makes the switch. When you reboot, the last-active boot environment is up and running again.

  1. At the OK prompt, boot the machine to single-user state from the Solaris Operating System DVD, Solaris Software - 1 CD, the network, or a local disk:


    OK boot cdrom -s 
    

    or


    OK boot net -s
    

    or


    OK boot device_name -s
    
    device_name

    Specifies the name of the disk and the slice where a copy of the operating system resides, for example /dev/dsk/c0t0d0s0

  2. If necessary, check the integrity of the root (/) file system for the fallback boot environment.


    # fsck device_name
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  3. Mount the root (/) slice of the boot environment that you want to fall back to on a directory, such as /mnt:


    # mount device_name /mnt
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  4. From the mounted root (/) slice, type:


    # /mnt/sbin/luactivate
    

    luactivate activates the previous working boot environment and indicates the result.

  5. Unmount /mnt.


    # umount /mnt
    
  6. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.
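
For reference, a condensed sketch of this procedure, assuming a net installation image and a fallback root (/) slice of /dev/dsk/c0t4d0s0 (both hypothetical):


OK boot net -s
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
# umount /mnt
# init 6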

x86: Falling Back to the Original Boot Environment

To fall back to the original boot environment, choose the procedure that best fits your circumstances.

Procedure: x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu

Use this procedure when you have successfully activated your new boot environment, but are dissatisfied with the results. You can quickly switch back to the original boot environment by using the GRUB menu.


Note –

The boot environments that are being switched must be GRUB boot environments that were created with GRUB software. If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment is not a GRUB boot environment.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Reboot the system.


    # init 6
    

    The GRUB menu is displayed. The Solaris OS is the original boot environment. The second_disk boot environment was successfully activated and appears on the GRUB menu. The failsafe entries are for recovery if for some reason the primary entry does not boot.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. To boot to the original boot environment, use the arrow keys to select it and press Return.


Example 6–1 To Fall Back Despite Successful New Boot Environment Activation


# su
# init 6

GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
|second_disk                                                        |
|second_disk failsafe                                               |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.

Select the original boot environment, Solaris.


Procedure: x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu

If you experience a failure while booting, use the following procedure to fall back to the original boot environment. In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.


Caution –

For the Solaris 10 3/05 release, the recommended action to fall back if the previous boot environment and new boot environment were on different disks included changing the hard disk boot order in the BIOS. Starting with the Solaris 10 1/06 release, changing the BIOS disk order is unnecessary and is strongly discouraged. Changing the BIOS disk order might invalidate the GRUB menu and cause the boot environment to become unbootable. If the BIOS disk order is changed, reverting the order back to the original settings restores system functionality.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To display the GRUB menu, reboot the system.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to this procedure, x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu and the DVD or CD.

  4. Boot to single-user mode by editing the GRUB menu.

    1. To edit the GRUB main menu, type e.

      The GRUB edit menu is displayed.


      root (hd0,2,a)
      kernel /platform/i86pc/multiboot
      module /platform/i86pc/boot_archive
    2. Select the original boot environment's kernel entry by using the arrow keys.

    3. To edit the boot entry, type e.

      The kernel entry is displayed in the GRUB edit menu.


      grub edit>kernel /boot/multiboot
    4. Type -s and press Enter.

      The following example shows the placement of the -s option.


      grub edit>kernel /boot/multiboot -s
      
    5. To begin the booting process in single user mode, type b.

  5. If necessary, check the integrity of the root (/) file system for the fallback boot environment.


    # fsck mount_point
    
    mount_point

    Specifies a root (/) file system that is known and reliable

  6. Mount the original boot environment root slice to some directory (such as /mnt):


    # mount device_name /mnt
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  7. From the mounted root (/) slice, type:


    # /mnt/sbin/luactivate
    

    luactivate activates the previous working boot environment and indicates the result.

  8. Unmount /mnt.


    # umount /mnt
    
  9. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.
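
For reference, a condensed sketch of steps 4 through 9, assuming a fallback root (/) slice of /dev/dsk/c0t4d0s0 (a hypothetical device):


grub edit>kernel /boot/multiboot -s
(Type b to boot to single-user mode.)
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
# umount /mnt
# init 6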

Procedure: x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu and the DVD or CD

If you experience a failure while booting, use the following procedure to fall back to the original boot environment. In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.


Caution –

For the Solaris 10 3/05 release, the recommended action to fall back if the previous boot environment and new boot environment were on different disks included changing the hard disk boot order in the BIOS. Starting with the Solaris 10 1/06 release, changing the BIOS disk order is unnecessary and is strongly discouraged. Changing the BIOS disk order might invalidate the GRUB menu and cause the boot environment to become unbootable. If the BIOS disk order is changed, reverting the order back to the original settings restores system functionality.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.

  3. Boot from the DVD or CD.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  4. Boot to single-user mode by editing the GRUB menu.

    1. To edit the GRUB main menu, type e.

      The GRUB edit menu is displayed.


      root (hd0,2,a)
      kernel /platform/i86pc/multiboot
      module /platform/i86pc/boot_archive
    2. Select the original boot environment's kernel entry by using the arrow keys.

    3. To edit the boot entry, type e.

      The kernel entry is displayed in an editor.


      grub edit>kernel /boot/multiboot
    4. Type -s and press Enter.

      The following example shows the placement of the -s option.


      grub edit>kernel /boot/multiboot -s
      
    5. To begin the booting process in single user mode, type b.

  5. If necessary, check the integrity of the root (/) file system for the fallback boot environment.


    # fsck mount_point
    
    mount_point

    Specifies a root (/) file system that is known and reliable

  6. Mount the original boot environment root slice to some directory (such as /mnt):


    # mount device_name /mnt
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  7. From the mounted root (/) slice, type:


    # /mnt/sbin/luactivate
    Do you want to fallback to activate boot environment c0t4d0s0
    (yes or no)? yes
    

    luactivate activates the previous working boot environment and indicates the result.

  8. Unmount /mnt.


    # umount device_name
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  9. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

Chapter 7 Maintaining Solaris Live Upgrade Boot Environments (Tasks)

This chapter explains various maintenance tasks such as keeping a boot environment file system up to date or deleting a boot environment. This chapter contains the following sections:

Overview of Solaris Live Upgrade Maintenance

Table 7–1 Overview of Solaris Live Upgrade Maintenance

Task  

Description 

For Instructions 

(Optional) View Status. 

  • View whether a boot environment is active, being activated, scheduled to be activated, or in the midst of a comparison.

 
  • Compare the active and inactive boot environments.

 
  • Display the name of the active boot environment.

 
  • View the configurations of a boot environment.

(Optional) Update an inactive boot environment. 

Copy file systems from the active boot environment again without changing the configuration of file systems. 

Updating a Previously Configured Boot Environment

(Optional) Other tasks. 

  • Delete a boot environment.

 
  • Change the name of a boot environment.

 
  • Add or change a description that is associated with a boot environment name.

 
  • Cancel scheduled jobs.

Displaying the Status of All Boot Environments

Use the Status menu or the lustatus command to display information about a boot environment. If no boot environment is specified, the status information for all boot environments on the system is displayed.

The following details for each boot environment are displayed:

Procedure: To Display the Status of All Boot Environments (Character User Interface)

    From the main menu, select Status.

    A table that is similar to the following is displayed:


    boot environment    Is         Active   Active     Can      Copy
    Name                Complete   Now      OnReboot   Delete   Status
    ------------------------------------------------------------------------
    disk_a_S9           yes        yes      yes        no       -
    disk_b_S10database  yes        no       no         yes      COPYING
    disk_b_S9a          no         no       no         yes      -

    Note –

    In this example, you could not perform copy, rename, or upgrade operations on disk_b_S9a because it is not complete, nor on disk_b_S10database, because a live upgrade operation is in progress.


Procedure: To Display the Status of All Boot Environments (Command-Line Interface)

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lustatus BE_name
    
    BE_name

    Specifies the name of the inactive boot environment to view status. If BE_name is omitted, lustatus displays status for all boot environments in the system.

    In this example, the status for all boot environments is displayed.


    # lustatus
    boot environment    Is         Active   Active     Can      Copy
    Name                Complete   Now      OnReboot   Delete   Status
    ------------------------------------------------------------------------
    disk_a_S9           yes        yes      yes        no       -
    disk_b_S10database  yes        no       no         yes      COPYING
    disk_b_S9a          no         no       no         yes      -

    Note –

    You could not perform copy, rename, or upgrade operations on disk_b_S9a because it is not complete, nor on disk_b_S10database because a live upgrade operation is in progress.


Updating a Previously Configured Boot Environment

You can update the contents of a previously configured boot environment with the Copy menu or the lumake command. File systems from the active (source) boot environment are copied to the target boot environment, and the existing data on the target boot environment is destroyed. A boot environment must have the status “complete” before you can copy from it. See Displaying the Status of All Boot Environments to determine a boot environment's status.

The copy job can be scheduled for a later time, and only one job can be scheduled at a time. To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.
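
For example, a minimal command-line sketch of a scheduled copy, assuming a target boot environment named second_disk and a hypothetical notification address; the time is given in at(1) format:


# lumake -n second_disk -t 8:15pm -m joe@anywhere.com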

Procedure: To Update a Previously Configured Boot Environment (Character User Interface)

  1. From the main menu, select Copy.

  2. Type the name of the inactive boot environment to update:


    Name of Target Boot Environment: solaris8
    
  3. Continue or schedule the copy to occur later:

    • To continue with the copy, press Return.

      The inactive boot environment is updated.

    • To schedule the copy for later, type y, a time (by using the at command format), and the email address to which to send the results:


      Do you want to schedule the copy? y
      Enter the time in 'at' format to schedule copy: 8:15 PM
      Enter the address to which the copy log should be mailed: 
      someone@anywhere.com

      For information about time formats, see the at(1) man page.

      The inactive boot environment is updated.

      To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.

Procedure: To Update a Previously Configured Boot Environment (Command-Line Interface)

This procedure copies source files over outdated files on a boot environment that was previously created.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lumake -n BE_name [-s source_BE] [-t time] [-m email_address]
    
    -n BE_name

    Specifies the name of the boot environment that has file systems that are to be replaced.

    -s source_BE

    (Optional) Specifies the name of the source boot environment that contains the file systems to be copied to the target boot environment. If you omit this option, lumake uses the current boot environment as the source.

    -t time

    (Optional) Sets up a batch job that copies over the file systems on the specified boot environment at the specified time. The time is given in the format that is specified by the at(1) man page.

    -m email_address

    (Optional) Enables you to send an email of the lumake output to a specified address on command completion. email_address is not checked. You can use this option only in conjunction with -t.


Example 7–1 Updating a Previously Configured Boot Environment (Command-Line Interface)

In this example, file systems from first_disk are copied to second_disk. When the job is completed, an email is sent to Joe at anywhere.com.


# lumake -n second_disk -s first_disk -m joe@anywhere.com

The file systems on first_disk are copied to second_disk, and an email notification is sent. To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.


Canceling a Scheduled Create, Upgrade, or Copy Job

A boot environment's scheduled creation, upgrade, or copy job can be canceled just prior to the time the job starts. A job can be scheduled for a specific time in the GUI with the Create a Boot Environment, Upgrade a Boot Environment, or Copy a Boot Environment menus, or in the CLI by the lumake command. At any time, only one job can be scheduled on a system.
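
For example, a minimal sketch that schedules a copy job and then cancels it before the scheduled time, assuming a boot environment named second_disk and a hypothetical address:


# lumake -n second_disk -t 8:15pm -m joe@anywhere.com
# lucancel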

Procedure: To Cancel a Scheduled Create, Upgrade, or Copy Job (Character User Interface)

  1. From the main menu, select Cancel.

  2. To view a list of boot environments that are available for canceling, press F2.

  3. Select the boot environment to cancel.

    The job no longer executes at the time specified.

Procedure: To Cancel a Scheduled Create, Upgrade, or Copy Job (Command-Line Interface)

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucancel
    

    The job no longer executes at the time that is specified.

Comparing Boot Environments

Use the Compare menu or the lucompare command to check for differences between the active boot environment and other boot environments. To make a comparison, the inactive boot environment must be in a complete state and cannot have a copy job that is pending. See Displaying the Status of All Boot Environments.

The specified boot environment cannot have any partitions that are mounted with lumount or mount.

Procedure: To Compare Boot Environments (Character User Interface)

  1. From the main menu, select Compare.

  2. Select either Compare to Original or Compare to an Active Boot Environment.

  3. Press F3.

  4. Type the names of the original (active) boot environment, the inactive boot environment, and the path to a file:


    Name of Parent: solaris8
    Name of Child: solaris8-1
    Full Pathname of the file to Store Output: /tmp/compare
    
  5. To save to the file, press F3.

    The Compare menu displays the following file attributes:

    • Mode.

    • Number of links.

    • Owner.

    • Group.

    • Checksum – Computes checksums only if the file in the specified boot environment matches its counterpart on the active boot environment in all of the fields that are described previously. If everything matches but the checksums differ, the differing checksums are appended to the entries for the compared files.

    • Size.

    • Existence of files in only one boot environment.

  6. To return to the Compare menu, press F3.

Procedure: To Compare Boot Environments (Command-Line Interface)

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/lucompare -i infile (or) -t -o outfile BE_name
    
    -i infile

    Compare files that are listed in infile. The files to be compared should have absolute file names. If an entry in the file is a directory, the comparison recurses into that directory. Use either this option or -t, not both.

    -t

    Compare only nonbinary files. This comparison uses the file(1) command on each file to determine whether the file is a text file. Use either this option or -i, not both.

    -o outfile

    Redirect the output of differences to outfile.

    BE_name

    Specifies the name of the boot environment that is compared to the active boot environment.


Example 7–2 Comparing Boot Environments (Command-Line Interface)

In this example, the first_disk boot environment (the source) is compared to the second_disk boot environment, and the results are sent to a file.


# /usr/sbin/lucompare -i /etc/lu/compare/ \
-o /var/tmp/compare.out second_disk
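
Alternatively, a sketch of the same comparison restricted to nonbinary files, using the -t option instead of an input file:


# /usr/sbin/lucompare -t -o /var/tmp/compare.out second_disk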

Deleting an Inactive Boot Environment

Use either the Delete menu or the ludelete command to remove a boot environment. Note the following limitations.

Procedure: To Delete an Inactive Boot Environment (Character User Interface)

  1. From the main menu, select Delete.

  2. Type the name of the inactive boot environment you want to delete:


    Name of boot environment: solaris8
    

    The inactive boot environment is deleted.

Procedure: To Delete an Inactive Boot Environment (Command-Line Interface)

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # ludelete BE_name
    
    BE_name

    Specifies the name of the inactive boot environment that is to be deleted


Example 7–3 Deleting an Inactive Boot Environment (Command-Line Interface)

In this example, the boot environment, second_disk, is deleted.


# ludelete second_disk

Displaying the Name of the Active Boot Environment

Use the Current menu or the lucurr command to display the name of the currently running boot environment. If no boot environments are configured on the system, the message “No Boot Environments are defined” is displayed. Note that lucurr reports only the name of the current boot environment, not the boot environment that is active on the next reboot. See Displaying the Status of All Boot Environments to determine a boot environment's status.

Procedure: To Display the Name of the Active Boot Environment (Character User Interface)

    From the main menu, select Current.

    The active boot environment's name or the message “No Boot Environments are defined” is displayed.

Procedure: To Display the Name of the Active Boot Environment (Command-Line Interface)

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/lucurr
    

Example 7–4 Displaying the Name of the Active Boot Environment (Command-Line Interface)

In this example, the name of the current boot environment is displayed.


# /usr/sbin/lucurr
solaris8

Changing the Name of a Boot Environment

Renaming a boot environment is often useful when you upgrade the boot environment from one Solaris release to another release. For example, following an operating system upgrade, you might rename the boot environment solaris8 to solaris10.

Use the Rename menu or the lurename command to change the inactive boot environment's name.


x86 only –

Starting with the Solaris 10 1/06 release, the GRUB menu is automatically updated when you use the Rename menu or lurename command. The updated GRUB menu displays the boot environment's name in the list of boot entries. For more information about the GRUB menu, see x86: Activating a Boot Environment With the GRUB Menu.

To determine the location of the GRUB menu's menu.lst file, see x86: Locating the GRUB Menu's menu.lst File (Tasks).


Table 7–2 Limitations for Naming a Boot Environment

  • The name must not exceed 30 characters in length.

  • The name can consist only of alphanumeric characters and other ASCII characters that are not special to the UNIX shell. See the “Quoting” section of sh(1).

  • The name can contain only single-byte, 8-bit characters.

  • The name must be unique on the system.

  • A boot environment must have the status “complete” before you rename it. See Displaying the Status of All Boot Environments to determine a boot environment's status.

  • You cannot rename a boot environment that has file systems mounted with lumount or mount.

Procedure: To Change the Name of an Inactive Boot Environment (Character User Interface)

  1. From the main menu, select Rename.

  2. Type the boot environment to rename and then the new name.

  3. To save your changes, press F3.

Procedure: To Change the Name of an Inactive Boot Environment (Command-Line Interface)

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lurename -e BE_name -n new_name
    
    -e BE_name

    Specifies the inactive boot environment name to be changed

    -n new_name

    Specifies the new name of the inactive boot environment

    In this example, second_disk is renamed to third_disk.


    # lurename -e second_disk -n third_disk
    

Adding or Changing a Description Associated With a Boot Environment Name

You can associate a description with a boot environment name. The description never replaces the name. Although a boot environment name is restricted in length and characters, the description can be of any length and of any content. The description can be simple text or as complex as a gif file. You can create this description either when you create the boot environment or at any time afterward:

  • For more information about using the -A option with lucreate, see To Create a Boot Environment for the First Time (Command-Line Interface).

  • For more information about creating the description after the boot environment has been created, see ludesc(1M).

Procedure: To Add or Change a Description for a Boot Environment Name With Text

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -n BE_name 'BE_description'
    
    -n BE_name 'BE_description'

    Specifies the boot environment name and the new description to be associated with the name


Example 7–5 Adding a Description to a Boot Environment Name With Text

In this example, a boot environment description is added to a boot environment that is named second_disk. The description is text that is enclosed in single quotes.


# /usr/sbin/ludesc -n second_disk 'Solaris 10 11/06 test build'

Procedure: To Add or Change a Description for a Boot Environment Name With a File

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -n BE_name -f file_name
    
    -n BE_name

    Specifies the boot environment name

    -f file_name

    Specifies the file to be associated with a boot environment name


Example 7–6 Adding a Description to a Boot Environment Name With a File

In this example, a boot environment description is added to a boot environment that is named second_disk. The description is contained in a gif file.


# /usr/sbin/ludesc -n second_disk -f rose.gif

Procedure: To Determine a Boot Environment Name From a Text Description

The following command returns the name of the boot environment associated with the specified description.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -A 'BE_description'
    
    -A 'BE_description'

    Specifies the description to be associated with the boot environment name.


Example 7–7 Determining a Boot Environment Name From a Description

In this example, the name of the boot environment, second_disk, is determined by using the -A option with the description.


# /usr/sbin/ludesc -A 'Solaris 10 11/06 test build'
second_disk

Procedure: To Determine a Boot Environment Name From a Description in a File

The following command displays the boot environment's name that is associated with a file. The file contains the description of the boot environment.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -f file_name
    
    -f file_name

    Specifies the name of the file that contains the description of the boot environment.


Example 7–8 Determining a Boot Environment Name From a Description in a File

In this example, the name of the boot environment, second_disk, is determined by using the -f option and the name of the file that contains the description.


# /usr/sbin/ludesc -f rose.gif
second_disk

Procedure: To Determine a Boot Environment Description From a Name

This procedure displays the description of the boot environment that is named in the command.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -n BE_name
    
    -n BE_name

    Specifies the boot environment name.


Example 7–9 Determining a Boot Environment Description From a Name

In this example, the description is determined by using the -n option with the boot environment name.


# /usr/sbin/ludesc -n second_disk
Solaris 10 11/06 test build

Viewing the Configuration of a Boot Environment

Use the List menu or the lufslist command to list the configuration of a boot environment. The output contains the disk slice (file system), file system type, and file system size for each boot environment mount point.

Procedure: To View the Configuration of Each Inactive Boot Environment (Character User Interface)

  1. From the main menu, select List.

  2. To view the status of a boot environment, type the name.


    Name of Boot Environment: solaris8
    
  3. Press F3.

    The following example displays a list.


    Filesystem                fstype       size(Mb) Mounted on
    ------------------------------------------------------------------
    /dev/dsk/c0t0d0s1         swap           512.11 -
    /dev/dsk/c0t4d0s3         ufs           3738.29 /
    /dev/dsk/c0t4d0s4         ufs            510.24 /opt
  4. To return to the List menu, press F6.

Procedure: To View the Configuration of a Boot Environment (Command-Line Interface)

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lufslist -n BE_name
    
    BE_name

    Specifies the name of the boot environment to view file system specifics

    The following example displays a list.


    Filesystem                fstype       size(Mb) Mounted on
    ------------------------------------------------------------------
    /dev/dsk/c0t0d0s1         swap           512.11 -
    /dev/dsk/c0t4d0s3         ufs           3738.29 /
    /dev/dsk/c0t4d0s4         ufs            510.24 /opt

Chapter 8 x86: Locating the GRUB Menu's menu.lst File (Tasks)

This chapter describes how to update the GRUB menu.lst file manually. For example, you might want to change how long the GRUB menu waits before booting the default OS. Or, you might want to add another OS to the GRUB menu. This chapter provides several examples for finding the menu.lst file.
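
For example, to make the GRUB menu wait 30 seconds instead of 10 before booting the default entry, you would change the timeout value in the active menu.lst file. A minimal sketch of the relevant lines follows; the entries on your system will differ:


default 0
timeout 30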

For background information on GRUB based booting, see Chapter 6, GRUB Based Booting for Solaris Installation, in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.

x86: Locating the GRUB Menu's menu.lst File (Tasks)

You must always use the bootadm command to locate the GRUB menu's menu.lst file. The list-menu subcommand finds the active GRUB menu. The menu.lst file lists all the operating systems that are installed on a system. The contents of this file dictate the list of operating systems that is displayed on the GRUB menu.

Typically, the active GRUB menu's menu.lst file is located at /boot/grub/menu.lst. In some situations, the GRUB menu.lst file resides elsewhere. For example, in a system that uses Solaris Live Upgrade, the GRUB menu.lst file might be on a boot environment that is not the currently running boot environment. Or, if you have upgraded a system with an x86 boot partition, the menu.lst file might reside in the /stubboot directory.

Only the active GRUB menu.lst file is used to boot the system. To modify the GRUB menu that is displayed when you boot the system, you must modify the active GRUB menu.lst file. Changing any other GRUB menu.lst file has no effect on the menu that is displayed when you boot the system. To determine the location of the active GRUB menu.lst file, use the bootadm command. The list-menu subcommand displays the location of the active GRUB menu. The following procedures determine the location of the GRUB menu's menu.lst file.

For more information about the bootadm command, see the bootadm(1M) man page.

Procedure: Locating the GRUB Menu's menu.lst File

In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. The Solaris OS has been booted and contains the GRUB menu.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To locate the menu.lst file, type:


    # /sbin/bootadm list-menu
    

    The location and contents of the file are displayed.


    The location for the active GRUB menu is: /boot/grub/menu.lst
    default 0
    timeout 10
    0 Solaris
    1 Solaris failsafe
    2 second_disk
    3 second_disk failsafe

Procedure: Locating the GRUB Menu's menu.lst File When the Active menu.lst File Is in Another Boot Environment

In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. In this example, the menu.lst file does not exist in the currently running boot environment. The second_disk boot environment has been booted. The Solaris boot environment contains the GRUB menu. The Solaris boot environment is not mounted.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To locate the menu.lst file, type:


    # /sbin/bootadm list-menu
    

    The location and contents of the file are displayed.


    The location for the active GRUB menu is: /dev/dsk/device_name(not mounted)
    The filesystem type of the menu device is <ufs>
    default 0
    timeout 10
    0 Solaris
    1 Solaris failsafe
    2 second_disk
    3 second_disk failsafe
  3. Because the file system containing the menu.lst file is not mounted, mount the file system. Specify the UFS file system and the device name.


    # /usr/sbin/mount -F ufs /dev/dsk/device_name /mnt
    

    Where device_name specifies the location of the root (/) file system on the disk device of the boot environment that you want to mount. The device name is entered in the form of /dev/dsk/cwtxdysz. For example:


    # /usr/sbin/mount -F ufs /dev/dsk/c0t1d0s0 /mnt
    

    You can access the GRUB menu at /mnt/boot/grub/menu.lst.

  4. Unmount the file system.


    # /usr/sbin/umount /mnt
    

    Note –

    If you mount a boot environment or a file system of a boot environment, ensure that the file system or file systems are unmounted after use. If these file systems are not unmounted, future Solaris Live Upgrade operations on that boot environment might fail.


Procedure: Locating the GRUB Menu's menu.lst File When a Solaris Live Upgrade Boot Environment Is Mounted

In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. The second_disk boot environment has been booted. The Solaris boot environment contains the GRUB menu. The Solaris boot environment is mounted at /.alt.Solaris.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To locate the menu.lst file, type:


    # /sbin/bootadm list-menu
    

    The location and contents of the file are displayed.


    The location for the active GRUB menu is:
    /.alt.Solaris/boot/grub/menu.lst
    default 0
    timeout 10
    0 Solaris
    1 Solaris failsafe
    2 second_disk
    3 second_disk failsafe

    Because the boot environment that contains the GRUB menu is already mounted, you can access the menu.lst file at /.alt.Solaris/boot/grub/menu.lst.

Procedure: Locating the GRUB Menu's menu.lst File When Your System Has an x86 Boot Partition

In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. The second_disk boot environment has been booted. Your system has been upgraded and an x86 boot partition remains. The boot partition is mounted at /stubboot and contains the GRUB menu. For an explanation of x86 boot partitions, see Partitioning Recommendations in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To locate the menu.lst file, type:


    # /sbin/bootadm list-menu
    

    The location and contents of the file are displayed.


    The location for the active GRUB menu is:
    /stubboot/boot/grub/menu.lst
    default 0
    timeout 10
    0 Solaris
    1 Solaris failsafe
    2 second_disk
    3 second_disk failsafe

    You can access the menu.lst file at /stubboot/boot/grub/menu.lst.

Chapter 9 Solaris Live Upgrade (Examples)

This chapter provides examples of creating a boot environment, and then upgrading and activating the new boot environment, which becomes the currently running system. This chapter contains the following sections:

Example of Upgrading With Solaris Live Upgrade (Command-Line Interface)

In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 9 release. The new boot environment is upgraded to the Solaris 10 11/06 release by using the luupgrade command. The upgraded boot environment is activated by using the luactivate command. An example of falling back to the original boot environment is also given.

To Install Required Patches


Caution –

Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches.



x86 only –

Starting with the Solaris 10 1/06 release, if this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. Even if you don't see this error message, the necessary patches still might not be installed. Always verify that all patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade.


ERROR: Cannot find or is not 
executable: </sbin/biosdev>.
ERROR: One or more patches required by 
Live Upgrade has not been installed.

The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, check that you have the latest Solaris Live Upgrade patches installed.

Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 72099 on the SunSolve web site.

If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches that is needed to run the Java 2 runtime environment. You must have the patch cluster that is recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages.

To install the Solaris Live Upgrade packages directly, use the pkgadd command. Or, install the recommended patch cluster for the Java 2 runtime environment and then run the installer. The patch cluster is available at http://sunsolve.sun.com.
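
For example, a minimal sketch of a direct package installation with the pkgadd command, assuming the media is mounted at /cdrom/cdrom0 and that the Solaris Live Upgrade packages SUNWlur and SUNWluu are in the media's Product directory (the exact path might differ on your media):


# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlur SUNWluu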

Follow these steps to install the required patches.

  1. From the SunSolve web site, obtain the list of patches.

  2. Install the patches. In this example, the patches are stored in a directory on the network.


    # patchadd /net/server/export/patches
    
  3. Reboot the system if any of the patches require a reboot.


    # init 6

To Install Solaris Live Upgrade on the Active Boot Environment


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Insert the Solaris Operating System DVD or Solaris Software - 2 CD.

  2. Follow the step for the media you are using.

    • If you are using the Solaris Operating System DVD, change the directory to the installer and run the installer.

      • For SPARC based systems:


        # cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
        # ./liveupgrade20
        
      • For x86 based systems:


        # cd /cdrom/cdrom0/Solaris_10/Tools/Installers
        # ./liveupgrade20
        

      The Solaris installation program GUI is displayed.

    • If you are using the Solaris Software - 2 CD, run the installer.


      % ./installer
      

      The Solaris installation program GUI is displayed.

  3. From the Select Type of Install panel, click Custom.

  4. On the Locale Selection panel, click the language to be installed.

  5. Choose the software to install.

    • For DVD, on the Component Selection panel, click Next to install the packages.

    • For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click the other product choices to deselect this software.

  6. Follow the directions on the Solaris installation program panels to install the software.

To Create a Boot Environment

The source boot environment is named c0t4d0s0 by using the -c option. Naming the source boot environment is required only when the first boot environment is created. For more information about naming using the -c option, see the description in “To Create a Boot Environment for the First Time” Step 2.

The new boot environment is named c0t15d0s0. The -A option creates a description that is associated with the boot environment name.

The root (/) file system is copied to the new boot environment. Also, a new swap slice is created rather than sharing the source boot environment's swap slice.


# lucreate -A 'BE_description' -c c0t4d0s0 -m /:/dev/dsk/c0t15d0s0:ufs \
-m -:/dev/dsk/c0t15d0s1:swap -n c0t15d0s0

To Upgrade the Inactive Boot Environment

The inactive boot environment is named c0t15d0s0. The operating system image to be used for the upgrade is taken from the network.


# luupgrade -n c0t15d0s0 -u \
-s /net/ins-svr/export/Solaris_10/combined.solaris_wos

To Check if Boot Environment Is Bootable

The lustatus command reports whether the boot environment creation is complete. lustatus also shows whether the boot environment is bootable.


# lustatus
boot environment    Is         Active   Active     Can      Copy
Name                Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
c0t4d0s0            yes        yes      yes        no       -
c0t15d0s0           yes        no       no         yes      -

To Activate the Inactive Boot Environment

The c0t15d0s0 boot environment is made bootable with the luactivate command. The system is then rebooted and c0t15d0s0 becomes the active boot environment. The c0t4d0s0 boot environment is now inactive.


# luactivate c0t15d0s0
# init 6

(Optional) To Fall Back to the Source Boot Environment

The following procedures for falling back depend on your new boot environment activation situation:


Example 9–1 SPARC: To Fall Back Despite Successful Boot Environment Activation

In this example, the new boot environment was activated successfully, but the original boot environment, first_disk (on device c0t4d0s0), is reinstated as the active boot environment.


# /sbin/luactivate first_disk 
# init 6


Example 9–2 SPARC: To Fall Back From a Failed Boot Environment Activation

In this example, the new boot environment was not bootable. You must return to the OK prompt before booting from the original boot environment, c0t4d0s0, in single-user mode.


OK boot net -s
# /sbin/luactivate first_disk
Do you want to fallback to activate boot environment c0t4d0s0 
(yes or no)? yes
# init 6

The original boot environment, c0t4d0s0, becomes the active boot environment.



Example 9–3 SPARC: To Fall Back to the Original Boot Environment by Using a DVD, CD, or Net Installation Image

In this example, the new boot environment was not bootable. You cannot boot from the original boot environment and must use media or a net installation image. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.


OK boot net -s
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt 
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0 
(yes or no)? yes
# umount /mnt 
# init 6


Example 9–4 x86: To Fall Back to the Original Boot Environment By Using the GRUB Menu

Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the GRUB menu.

In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. To enable a fallback, the original boot environment is booted in single-user mode.

  1. Become superuser or assume an equivalent role.

  2. To display the GRUB menu, reboot the system.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to Example 9–5.

  4. To edit the GRUB menu, type e.

  5. Select kernel /boot/multiboot by using the arrow keys and type e. The GRUB edit menu is displayed.


    grub edit>kernel /boot/multiboot
  6. Boot to single-user mode by typing -s.


    grub edit>kernel /boot/multiboot -s
    
  7. Boot and mount the boot environment. Then activate it.


# b
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt 
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6


Example 9–5 x86: To Fall Back to the Original Boot Environment With the GRUB Menu by Using the DVD or CD

Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the DVD or CD.

In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. To enable a fallback, the original boot environment is booted in single-user mode.

  1. Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.

  2. Become superuser or assume an equivalent role.

  3. Boot from the DVD or CD.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  4. To edit the GRUB menu, type e.

  5. Select kernel /boot/multiboot by using the arrow keys and type e. The GRUB edit menu is displayed.


    grub edit>kernel /boot/multiboot
  6. Boot to single-user mode by typing -s.


    grub edit>kernel /boot/multiboot -s
    
  7. Boot and mount the boot environment. Then activate and reboot.


# b
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt 
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6

Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) (Command-Line Interface)

This example shows you how to do the following tasks:

Figure 9–1 shows the current boot environment, which contains three physical disks.

Figure 9–1 Detaching and Upgrading One Side of a RAID-1 Volume (Mirror)


  1. Create a new boot environment, second_disk, that contains a mirror.

    The following command performs these tasks.

    • lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system, which is copied to the mirror d10. All data on the mirror d10 is overwritten.

    • Two slices, c0t1d0s0 and c0t2d0s0, are specified to be used as submirrors. These two submirrors are attached to mirror d10.


    # lucreate -c first_disk -n second_disk \ 
    -m /:/dev/md/dsk/d10:ufs,mirror \ 
    -m /:/dev/dsk/c0t1d0s0:attach \ 
    -m /:/dev/dsk/c0t2d0s0:attach
    
  2. Activate the second_disk boot environment.


    # /sbin/luactivate second_disk
    # init 6
    
  3. Create another boot environment, third_disk.

    The following command performs these tasks.

    • lucreate configures a UFS file system for the mount point root (/). A mirror, d20, is created.

    • Slice c0t1d0s0 is removed from its current mirror and is added to mirror d20. The contents of the submirror, the root (/) file system, are preserved and no copy occurs.


    # lucreate -n third_disk \ 
    -m /:/dev/md/dsk/d20:ufs,mirror \ 
    -m /:/dev/dsk/c0t1d0s0:detach,attach,preserve
    
  4. Upgrade the new boot environment, third_disk.


    # luupgrade -u -n third_disk \ 
    -s /net/installmachine/export/Solaris_10/OS_image
    
  5. Add a patch to the upgraded boot environment.


    # luupgrade -t -n third_disk -s /net/patches 222222-01
    
  6. Activate the third_disk boot environment to make this boot environment the currently running system.


    # /sbin/luactivate third_disk
    # init 6
    
  7. Delete the boot environment second_disk.


    # ludelete second_disk
    
  8. The following commands perform these tasks.

    • Clear mirror d10.

    • Check for the number of the concatenation of c0t2d0s0.

    • Attach the concatenation that is found by the metastat command to the mirror d20. The metattach command synchronizes the newly attached concatenation with the concatenation in mirror d20. All data on the concatenation is overwritten.


    # metaclear d10 
    # metastat -p | grep c0t2d0s0
    dnum 1 1 c0t2d0s0
    # metattach d20 dnum
    
    num

    Is the number that the metastat command reports for the concatenation

The new boot environment, third_disk, has been upgraded and is the currently running system. third_disk contains the root (/) file system that is mirrored.

Figure 9–2 shows the entire process of detaching a mirror and upgrading the mirror by using the commands in the preceding example.

Figure 9–2 Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) (continued)


Example of Migrating From an Existing Volume to a Solaris Volume Manager RAID-1 Volume (Command-Line Interface)

Solaris Live Upgrade enables the creation of a new boot environment on RAID–1 volumes (mirrors). The current boot environment's file systems can be on any of the following:

However, the new boot environment's target must be a Solaris Volume Manager RAID-1 volume. For example, the slice that is designated for a copy of the root (/) file system cannot be /dev/vx/dsk/rootvol. rootvol is the volume that contains the root (/) file system.

In this example, the current boot environment contains the root (/) file system on a volume that is not a Solaris Volume Manager volume. The new boot environment is created with the root (/) file system on the Solaris Volume Manager RAID-1 volume c0t2d0s0. The lucreate command migrates the current volume to the Solaris Volume Manager volume. The name of the new boot environment is svm_be. The lustatus command reports whether the new boot environment is ready to be activated and rebooted. The new boot environment is activated to become the current boot environment.


# lucreate -n svm_be -m /:/dev/md/dsk/d1:mirror,ufs \  
-m /:/dev/dsk/c0t2d0s0:attach
# lustatus
# luactivate svm_be
# lustatus
# init 6

Example of Creating an Empty Boot Environment and Installing a Solaris Flash Archive (Command-Line Interface)

The following procedures cover the three-step process:

The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When you use the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. The slices are reserved for the file systems that are specified, but no file systems are copied. The boot environment is named, but it is not actually populated until it is installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, file systems are installed on the reserved slices. The boot environment is then activated.

To Create an Empty Boot Environment

In this first step, an empty boot environment is created. Slices are reserved for the file systems that are specified, but no copy of file systems from the current boot environment occurs. The new boot environment is named second_disk.


# lucreate -s - -m /:/dev/dsk/c0t1d0s0:ufs \
-n second_disk

The boot environment is ready to be populated with a Solaris Flash archive.
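
You can confirm that the new boot environment exists but contains no file systems yet by checking its status. The following output is an illustration only: an empty boot environment reports an Is Complete value of no until an archive is installed on it.


# lustatus second_disk
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
second_disk                no       no     no        yes    -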

Figure 9–3 shows the creation of an empty boot environment.

Figure 9–3 Creating an Empty Boot Environment


To Install a Solaris Flash Archive on the New Boot Environment

In this second step, an archive is installed on the second_disk boot environment that was created in the previous example. The archive resides on a remote server and is accessed over the network. The operating system version for both the -s and -a options is the Solaris 10 11/06 release. The archive is named 10.flar.
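
Before running the installation, you can optionally confirm that the file is a valid Solaris Flash archive by displaying its identification section with the flar command. This minimal check assumes the archive path that is used in the installation command below.


# flar info /net/server/archive/10.flar

The archive is then installed on the boot environment: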


# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archive/10.flar

The boot environment is ready to be activated.

To Activate the New Boot Environment

In this last step, the second_disk boot environment is made bootable with the luactivate command. The system is then rebooted and second_disk becomes the active boot environment.


# luactivate second_disk
# init 6
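
After the reboot, a quick way to verify which boot environment is running is the lucurr command, which prints the name of the active boot environment; in this example it should print second_disk.


# /usr/sbin/lucurr
second_disk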

Example of Upgrading Using Solaris Live Upgrade (Character User Interface)

In this example, a new boot environment is created on a system that is running the Solaris 9 release. The new boot environment is upgraded to the Solaris 10 6/06 release. The upgraded boot environment is then activated.

To Install Solaris Live Upgrade on the Active Boot Environment

  1. Insert the Solaris Operating System DVD or Solaris Software - 2 CD.

  2. Run the installer for the media you are using.

    • If you are using the Solaris Operating System DVD, change to the directory that contains the installer, and then run the installer.

      • For SPARC based systems:


        # cd /cdrom/cdrom0/S0/Solaris_10/Tools/Installers
        # ./liveupgrade20
        

        The Solaris installation program GUI is displayed.

      • For x86 based systems:


        # cd /cdrom/cdrom0/Solaris_10/Tools/Installers
        # ./liveupgrade20
        

        The Solaris installation program GUI is displayed.

    • If you are using the Solaris Software - 2 CD, run the installer.


      % ./installer
      

      The Solaris installation program GUI is displayed.

  3. From the Select Type of Install panel, click Custom.

  4. On the Locale Selection panel, click the language to be installed.

  5. Choose the software to install.

    • For DVD, on the Component Selection panel, click Next to install the packages.

    • For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click the other product choices to deselect the software.

  6. Follow the directions on the Solaris installation program panels to install the software.

To Install Required Patches



Caution –

Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches.



x86 only –

Starting with the Solaris 10 1/06 release, if this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. Even if you do not see this error message, the necessary patches might still be missing. Always verify that all of the patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade.


ERROR: Cannot find or is not executable: </sbin/biosdev>.
ERROR: One or more patches required by Live Upgrade has not been installed.

The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components on which Solaris Live Upgrade depends. If you experience any difficulties with Solaris Live Upgrade, verify that you have the latest Solaris Live Upgrade patches installed.

Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for info doc 72099 on the SunSolve Web site.
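
To check whether a particular patch revision from the info doc is already installed, you can search the output of the showrev command. In this sketch, 123456-07 is a hypothetical patch ID used for illustration, not a patch taken from info doc 72099.


# showrev -p | grep 123456-07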

If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches that is needed to run the Java 2 runtime environment. To run the Solaris Live Upgrade installer and install the packages, you must have the patch cluster that is recommended for the Java 2 runtime environment. 

To install the Solaris Live Upgrade packages, use the pkgadd command, or install the recommended patch cluster for the Java 2 runtime environment. The patch cluster is available on http://sunsolve.sun.com.
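
The following pkgadd invocation is a sketch, assuming a SPARC based system with the Solaris Operating System DVD mounted; the Product directory path varies with your media and platform, and SUNWlur and SUNWluu are the Solaris Live Upgrade packages in this release.


# pkgadd -d /cdrom/cdrom0/S0/Solaris_10/Product SUNWlur SUNWluu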

Follow these steps to install the required patches.

  1. From the SunSolve web site, obtain the list of patches.

  2. Install the patches with the patchadd command. In this example, the patches have been downloaded to /net/server/export/patches.


    # patchadd /net/server/export/patches
    
  3. Reboot the system if necessary. Certain patches require a reboot to be effective.


    # init 6
    

To Create a Boot Environment

In this example, the source boot environment is named c0t4d0s0. The root (/) file system is copied to the new boot environment. Also, a new swap slice is created instead of sharing the source boot environment's swap slice.

  1. Become superuser or assume an equivalent role.

  2. Display the character user interface:


    # /usr/sbin/lu
    

    The Solaris Live Upgrade Main Menu is displayed.

  3. From the main menu, select Create.


    Name of Current Boot Environment:    c0t4d0s0
    Name of New Boot Environment:   c0t15d0s0 
    
  4. Press F3.

    The Configuration menu is displayed.

  5. To select a slice from the configuration menu, press F2.

    The Choices menu is displayed.

  6. Choose slice 0 from disk c0t15d0 for the root (/) file system.

  7. From the configuration menu, create a new slice for swap on c0t15d0 by selecting a swap slice to be split.

  8. To select a slice for swap, press F2.

    The Choices menu is displayed.

  9. Select slice 1 from disk c0t15d0 for the new swap slice.

  10. Press F3 to create the new boot environment.

To Upgrade the Inactive Boot Environment

The new boot environment is then upgraded. The new version of the operating system for the upgrade is taken from a network image.

  1. From the main menu, select Upgrade.


    Name of New Boot Environment:   c0t15d0s0 
    Package Media: /net/ins3-svr/export/Solaris_10/combined.solaris_wos
  2. Press F3.
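
For reference, the equivalent operation from the command line is shown in the following sketch, which reuses the boot environment name and the network image path from this example.


# luupgrade -u -n c0t15d0s0 \
-s /net/ins3-svr/export/Solaris_10/combined.solaris_wos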

To Activate the Inactive Boot Environment

The c0t15d0s0 boot environment is made bootable. The system is then rebooted and c0t15d0s0 becomes the active boot environment. The c0t4d0s0 boot environment is now inactive.

  1. From the main menu, select Activate.


    Name of Boot Environment: c0t15d0s0
    Do you want to force a Live Upgrade sync operation: no
    
  2. Press F3.

  3. Press Return.

  4. Type:


    # init 6
    

If a fallback is necessary, use the command-line procedures in the previous example: (Optional) To Fall Back to the Source Boot Environment.
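
In the simple case in which the new boot environment boots but you decide to revert, falling back amounts to activating the original boot environment again and rebooting. The following sketch assumes the boot environment names from this example.


# /sbin/luactivate c0t4d0s0
# init 6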

Chapter 10 Solaris Live Upgrade (Command Reference)

The following table shows the commands that you can type at the command line. Solaris Live Upgrade includes man pages for all of the listed command-line utilities.

Solaris Live Upgrade Command-Line Options

Task                                                         Command

Activate an inactive boot environment.                       luactivate(1M)

Cancel a scheduled copy or create job.                       lucancel(1M)

Compare an active boot environment with an inactive          lucompare(1M)
boot environment.

Recopy file systems to update an inactive boot               lumake(1M)
environment.

Create a boot environment.                                   lucreate(1M)

Name the active boot environment.                            lucurr(1M)

Delete a boot environment.                                   ludelete(1M)

Add a description to a boot environment name.                ludesc(1M)

List critical file systems for each boot environment.        lufslist(1M)

Mount all of the file systems in a boot environment.         lumount(1M)
This command enables you to modify the files in a boot
environment while that boot environment is inactive.

Rename a boot environment.                                   lurename(1M)

List the status of all boot environments.                    lustatus(1M)

Unmount all of the file systems in a boot environment.       luumount(1M)
This command unmounts the file systems that were
mounted with the lumount command.

Upgrade an OS or install a Solaris Flash archive on an       luupgrade(1M)
inactive boot environment.
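
As a quick reference, the following sequence shows how these commands combine into a typical upgrade cycle. This is a minimal sketch: the boot environment name new_be, the disk slice c0t1d0s0, and the network image path are placeholders, not values that this guide prescribes.


# lucreate -n new_be -m /:/dev/dsk/c0t1d0s0:ufs
# luupgrade -u -n new_be -s /net/installmachine/export/Solaris_10/OS_image
# lustatus
# luactivate new_be
# init 6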