Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Part I Upgrading With Solaris Live Upgrade

This part provides an overview and instructions for using Solaris Live Upgrade to create and upgrade an inactive boot environment. The boot environment can then be switched to become the current boot environment. This part is written for a system with a UFS root (/) file system. However, many of the commands can also be used with a ZFS file system.

Chapter 1 Where to Find Solaris Installation Planning Information

This book explains how to use the Solaris Live Upgrade program to upgrade the Solaris operating system. It covers everything you need to know about using Solaris Live Upgrade, but the planning book in our collection of installation documentation is also useful to read before you begin. The following references provide useful information to review before you upgrade your system.

Where to Find Planning and System Requirement Information

The Solaris 10 5/09 Installation Guide: Planning For Installation and Upgrade provides system requirements and high-level planning information, such as planning guidelines for file systems and for upgrading. The following list describes the chapters in the planning book and provides links to those chapters.

  • This chapter describes new features in the Solaris installation programs. See Chapter 2, What’s New in Solaris Installation, in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

  • This chapter provides information about decisions you need to make before you install or upgrade the Solaris OS, such as when to use a network installation image or DVD media, and describes all the Solaris installation programs. See Chapter 3, Solaris Installation and Upgrade (Roadmap), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

  • This chapter describes system requirements to install or upgrade to the Solaris OS, provides general guidelines for planning disk space and default swap space allocation, and describes upgrade limitations. See Chapter 4, System Requirements, Guidelines, and Upgrade (Planning), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

  • This chapter contains checklists to help you gather all of the information that you need to install or upgrade your system. This information is useful, for example, when you perform an interactive installation. See Chapter 5, Gathering Information Before Installation or Upgrade (Planning), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

  • These chapters provide overviews of several technologies that relate to a Solaris OS installation or upgrade, including guidelines and requirements for ZFS installations, booting, Solaris Zones partitioning technology, and RAID-1 volumes that can be created at installation. See Part II, Understanding Installations That Relate to ZFS, Booting, Solaris Zones, and RAID-1 Volumes, in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Chapter 2 Solaris Live Upgrade (Overview)

This chapter describes the Solaris Live Upgrade process.


Note –

This book uses the term slice, but some Solaris documentation and programs might refer to a slice as a partition.


Solaris Live Upgrade Introduction


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For an overview of migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 11, Solaris Live Upgrade and ZFS (Overview).


Solaris Live Upgrade provides a method of upgrading a system while the system continues to operate. While your current boot environment is running, you can duplicate the boot environment, then upgrade the duplicate. Or, rather than upgrading, you can install a Solaris Flash archive on a boot environment. The original system configuration remains fully functional and unaffected by the upgrade or installation of an archive. When you are ready, you can activate the new boot environment by rebooting the system. If a failure occurs, you can quickly revert to the original boot environment with a simple reboot. This switch eliminates the normal downtime of the test and evaluation process.

Solaris Live Upgrade enables you to duplicate a boot environment without affecting the currently running system. You can then upgrade the duplicate, install a Solaris Flash archive on it, or perform other maintenance on it while the original boot environment continues to run.

Some understanding of basic system administration is necessary before using Solaris Live Upgrade. For background information about system administration tasks such as managing file systems, mounting, booting, and managing swap, see the System Administration Guide: Devices and File Systems.

Solaris Live Upgrade Process

The following overview describes the tasks necessary to create a copy of the current boot environment, upgrade the copy, and switch the upgraded copy to become the active boot environment. The fallback process of switching back to the original boot environment is also described. Figure 2–1 describes this complete Solaris Live Upgrade process.

Figure 2–1 Solaris Live Upgrade Process

The context describes the illustration.

The following sections describe the Solaris Live Upgrade process.

  1. Creating a Boot Environment. A new boot environment can be created on a physical slice or a logical volume.

  2. Upgrading a Boot Environment

  3. Activating a Boot Environment

  4. Falling Back to the Original Boot Environment
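
At the command level, this sequence is typically driven by the lucreate, luupgrade, and luactivate commands, which are described in the chapters that follow. The following sketch summarizes the flow; the boot environment names, device path, and installation image path are examples only and must be adapted to your system.


# lucreate -c first_disk -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs
# luupgrade -u -n second_disk -s /net/installmachine/export/Solaris_10/OS_image
# luactivate second_disk
# init 6

If the new boot environment fails, you can fall back by activating the original boot environment again and rebooting.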

Creating a Boot Environment

The process of creating a boot environment provides a method of copying critical file systems from an active boot environment to a new boot environment. The disk is reorganized if necessary, file systems are customized, and the critical file systems are copied to the new boot environment.

File System Types

Solaris Live Upgrade distinguishes between two file system types: critical file systems and shareable file systems. The following descriptions cover these file system types.

Critical file systems

Critical file systems are required by the Solaris OS. These file systems are separate mount points in the vfstab of the active and inactive boot environments. These file systems are always copied from the source to the inactive boot environment. Critical file systems are sometimes referred to as nonshareable. Examples are root (/), /usr, /var, or /opt.

Shareable file systems

Shareable file systems are user-defined file systems, such as /export, that contain the same mount point in the vfstab in both the active and inactive boot environments. Therefore, updating shared files in the active boot environment also updates data in the inactive boot environment. When you create a new boot environment, shareable file systems are shared by default. But you can specify a destination slice and then the file systems are copied. For more detailed information about shareable file systems, see Guidelines for Selecting Slices for Shareable File Systems.

Swap

  • For UFS file systems, swap is a special shareable volume. Like a shareable file system, all swap slices are shared by default. But, if you specify a destination slice for swap, the swap slice is copied.

  • For ZFS file systems, swap and dump volumes are shared within the pool.

Creating RAID-1 Volumes on File Systems

Solaris Live Upgrade can create a boot environment with RAID-1 volumes (mirrors) on file systems. For an overview, see Creating a Boot Environment With RAID-1 Volume File Systems.

Copying File Systems

The process of creating a new boot environment begins by identifying an unused slice where a critical file system can be copied. If a slice is not available or a slice does not meet the minimum requirements, you need to format a new slice.

After the slice is defined, you can reconfigure the file systems on the new boot environment before the file systems are copied into the directories. You reconfigure file systems by splitting and merging them, which provides a simple way of editing the vfstab to connect and disconnect file system directories. You can merge file systems into their parent directories by specifying the same mount point. You can also split file systems from their parent directories by specifying different mount points.
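
For example, a command similar to the following sketch splits /usr from the root (/) file system by assigning it a separate slice in the new boot environment. The boot environment name and device names are examples only.


# lucreate -n second_disk -m /:/dev/dsk/c0t4d0s0:ufs \
-m /usr:/dev/dsk/c0t4d0s1:ufs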

After file systems are configured on the inactive boot environment, you begin the automatic copy. Critical file systems are copied to the designated directories. Shareable file systems are not copied, but are shared. The exception is that you can designate some shareable file systems to be copied. When the file systems are copied from the active to the inactive boot environment, the files are directed to the new directories. The active boot environment is not changed in any way.

For procedures to split or merge file systems, see the task procedures in Chapter 4, Using Solaris Live Upgrade to Create a Boot Environment (Tasks).

For an overview of creating a boot environment with RAID-1 volume file systems, see Creating a Boot Environment With RAID-1 Volume File Systems.

Examples of Creating a New Boot Environment

For UFS file systems, the following figures illustrate various ways of creating new boot environments.

For ZFS file systems, see Chapter 11, Solaris Live Upgrade and ZFS (Overview).

Figure 2–2 shows that critical file system root (/) has been copied to another slice on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. The new boot environment is an exact duplicate with the root (/) file system on a new slice. The /swap volume and /export/home file system are shared by the active and inactive boot environments.

Figure 2–2 Creating an Inactive Boot Environment – Copying the root (/) File System

The context describes the illustration.

Figure 2–3 shows critical file systems that have been split and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. On that slice, the root (/) file system contains the /usr, /var, and /opt directories. In the new boot environment, the root (/) file system is split and /usr and /opt are put on separate slices. The /swap volume and /export/home file system are shared by both boot environments.

Figure 2–3 Creating an Inactive Boot Environment – Splitting File Systems

The context describes the illustration.

Figure 2–4 shows critical file systems that have been merged and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system, /usr, /var, and /opt, with each file system on its own slice. In the new boot environment, /usr and /opt are merged into the root (/) file system on one slice. The /swap volume and /export/home file system are shared by both boot environments.

Figure 2–4 Creating an Inactive Boot Environment – Merging File Systems

The context describes the illustration.

Creating a Boot Environment With RAID-1 Volume File Systems

Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems encapsulated in RAID-1 volumes. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. Solaris Live Upgrade enables a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system.

A volume can group disk slices across several disks to transparently appear as a single disk to the OS. Solaris Live Upgrade is limited to creating a boot environment for the root (/) file system that contains single-slice concatenations inside a RAID-1 volume (mirror). This limitation is because the boot PROM is restricted to choosing one slice from which to boot.

How to Manage Volumes With Solaris Live Upgrade

When creating a boot environment, you can use Solaris Live Upgrade to manage the following tasks.

You use the lucreate command with the -m option to create a mirror, detach submirrors, and attach submirrors for the new boot environment.


Note –

If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.


For step-by-step procedures, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

For an overview of creating RAID-1 volumes when installing, see Chapter 9, Creating RAID-1 Volumes (Mirrors) During Installation (Overview), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

For in-depth information about other complex Solaris Volume Manager configurations that are not supported if you are using Solaris Live Upgrade, see Chapter 2, Storage Management Concepts, in Solaris Volume Manager Administration Guide.

Mapping Solaris Volume Manager Tasks to Solaris Live Upgrade

Solaris Live Upgrade manages a subset of Solaris Volume Manager tasks. Table 2–1 shows the Solaris Volume Manager components that Solaris Live Upgrade can manage.

Table 2–1 Classes of Volumes

Term 

Description 

concatenation

A RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a mirror. 

mirror

A RAID-1 volume. See RAID-1 volume. 

RAID-1 volume

A class of volume that replicates data by maintaining multiple copies. A RAID-1 volume is sometimes called a mirror. A RAID-1 volume is composed of one or more RAID-0 volumes that are called submirrors.  

RAID-0 volume

A class of volume that can be a stripe or a concatenation. These components are also called submirrors. A stripe or concatenation is the basic building block for mirrors.  

state database

A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. The state database tracks the location and status of all known state database replicas. 

state database replica 

A copy of a state database. The replica ensures that the data in the database is valid. 

submirror

See RAID-0 volume. 

volume

A group of physical slices or other volumes that appear to the system as a single logical device. A volume is functionally identical to a physical disk in the view of an application or file system. In some command-line utilities, a volume is called a metadevice.  

Examples of Using Solaris Live Upgrade to Create RAID-1 Volumes

The following examples present command syntax for creating RAID-1 volumes for a new boot environment.

Create RAID-1 Volume on Two Physical Disks

Figure 2–5 shows a new boot environment with a RAID-1 volume (mirror) that is created on two physical disks. The following command created the new boot environment and the mirror.


# lucreate -n second_disk -m /:/dev/md/dsk/d30:mirror,ufs \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d31:attach -m /:/dev/dsk/c0t2d0s0,/dev/md/dsk/d32:attach \ 
-m -:/dev/dsk/c0t1d0s1:swap -m -:/dev/dsk/c0t2d0s1:swap

This command performs the following tasks:

  • Creates a new boot environment named second_disk.

  • Creates a mirror, d30, and configures a UFS file system for the root (/) file system.

  • Creates single-slice concatenations, d31 and d32, on slices c0t1d0s0 and c0t2d0s0, and attaches the concatenations to mirror d30 as submirrors.

  • Configures swap on slices c0t1d0s1 and c0t2d0s1.

Figure 2–5 Create a Boot Environment and Create a Mirror

The context describes the illustration.

Create a Boot Environment and Use the Existing Submirror

Figure 2–6 shows a new boot environment that contains a RAID-1 volume (mirror). The following command created the new boot environment and the mirror.


# lucreate -n second_disk -m /:/dev/md/dsk/d20:ufs,mirror \ 
-m /:/dev/dsk/c0t1d0s0:detach,attach,preserve

This command performs the following tasks:

  • Creates a new boot environment named second_disk.

  • Creates a mirror, d20, and configures a UFS file system for the root (/) file system.

  • Detaches slice c0t1d0s0 from its current mirror, preserves its contents, and attaches the slice to mirror d20 as a submirror.

Figure 2–6 Create a Boot Environment and Use the Existing Submirror

The context describes the illustration.

Upgrading a Boot Environment

After you have created a boot environment, you can perform an upgrade on the boot environment. As part of that upgrade, the boot environment can contain RAID-1 volumes (mirrors) for any file systems. Or the boot environment can have non-global zones installed. The upgrade does not affect any files in the active boot environment. When you are ready, you activate the new boot environment, which then becomes the current boot environment.
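
For example, a command similar to the following sketch upgrades an inactive boot environment named second_disk from a network installation image; the boot environment name and image path are examples only.


# luupgrade -u -n second_disk -s /net/installmachine/export/Solaris_10/OS_image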

For procedures about upgrading a boot environment for UFS file systems, see Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).

For an example of upgrading a boot environment with a RAID-1 volume file system for UFS file systems, see Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror).

For procedures about upgrading with non-global zones for UFS file systems, see Chapter 8, Upgrading the Solaris OS on a System With Non-Global Zones Installed.

For upgrading ZFS file systems or migrating to a ZFS file system, see Chapter 11, Solaris Live Upgrade and ZFS (Overview).

Figure 2–7 shows an upgrade to an inactive boot environment.

Figure 2–7 Upgrading an Inactive Boot Environment

The context describes the illustration.

Rather than an upgrade, you can install a Solaris Flash archive on a boot environment. The Solaris Flash installation feature enables you to create a single reference installation of the Solaris OS on a system. This system is called the master system. Then, you can replicate that installation on a number of systems that are called clone systems. In this situation, the inactive boot environment is a clone. When you install the Solaris Flash archive on a system, the archive replaces all the files on the existing boot environment as an initial installation would.

For procedures about installing a Solaris Flash archive, see Installing Solaris Flash Archives on a Boot Environment.
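
For example, a command similar to the following sketch installs a Solaris Flash archive on an inactive boot environment by using the luupgrade -f option; the boot environment name, OS image path, and archive location are examples only.


# luupgrade -f -n second_disk -s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archives/solaris_10.flar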

The following figures show an installation of a Solaris Flash archive on an inactive boot environment. Figure 2–8 shows a system with a single hard disk. Figure 2–9 shows a system with two hard disks.

Figure 2–8 Installing a Solaris Flash Archive on a Single Disk

The context describes the illustration.

Figure 2–9 Installing a Solaris Flash Archive on Two Disks

The context describes the illustration.

Activating a Boot Environment

When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain system files and directories are copied from the last-active boot environment to the boot environment being booted. When you reboot the system, the configuration that you installed on the new boot environment is active. The original boot environment then becomes an inactive boot environment.
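
For example, commands similar to the following activate a boot environment named second_disk and then reboot; the boot environment name is an example only.


# luactivate second_disk
# init 6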

For procedures about activating a boot environment, see Activating a Boot Environment.

For information about synchronizing the active and inactive boot environments, see Synchronizing Files Between Boot Environments.

Figure 2–10 shows a switch after a reboot from an inactive to an active boot environment.

Figure 2–10 Activating an Inactive Boot Environment

The context describes the illustration.

Falling Back to the Original Boot Environment

If a failure occurs, you can quickly fall back to the original boot environment with an activation and reboot. The use of fallback takes only the time to reboot the system, which is much quicker than backing up and restoring the original. The new boot environment that failed to boot is preserved, and the failure can then be analyzed. You can fall back only to the boot environment that was used by the luactivate command to activate the new boot environment.

You fall back to the previous boot environment in the following ways:

Problem 

Action 

The new boot environment boots successfully, but you are not happy with the results. 

Run the luactivate command with the name of the previous boot environment and reboot.


x86 only –

Starting with the Solaris 10 1/06 release, you can fall back by selecting the original boot environment that is found on the GRUB menu. The original boot environment and the new boot environment must be based on the GRUB software. Booting from the GRUB menu does not synchronize files between the old and new boot environments. For more information about synchronizing files, see Forcing a Synchronization Between Boot Environments.


The new boot environment does not boot. 

Boot the fallback boot environment in single-user mode, run the luactivate command, and reboot.

You cannot boot in single-user mode. 

Perform the following steps: 

  • Boot from DVD or CD media or a net installation image

  • Mount the root (/) file system on the fallback boot environment

  • Run the luactivate command and reboot

For procedures to fall back, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
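
For example, in the simplest case, where the new boot environment boots but you want to return to the original boot environment, commands similar to the following perform the fallback. The boot environment name first_disk is an example only.


# luactivate first_disk
# init 6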

Figure 2–11 shows the switch that is made when you reboot to fall back.

Figure 2–11 Fallback to the Original Boot Environment

The context describes the illustration.

Maintaining a Boot Environment

You can also do various maintenance activities such as checking status, renaming, or deleting a boot environment. For maintenance procedures, see Chapter 7, Maintaining Solaris Live Upgrade Boot Environments (Tasks).
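
For example, the lustatus command lists the boot environments on a system and their status, and the ludelete command removes a boot environment that is no longer needed. The boot environment name below is an example only.


# lustatus
# ludelete second_disk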

Chapter 3 Solaris Live Upgrade (Planning)

This chapter provides guidelines and requirements for review before installing and using Solaris Live Upgrade. You also should review general information about upgrading in Upgrade Planning in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For planning information for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 12, Solaris Live Upgrade for ZFS (Planning).


This chapter contains the following sections:

Solaris Live Upgrade Requirements

Before you install and use Solaris Live Upgrade, become familiar with these requirements.

Solaris Live Upgrade System Requirements

Solaris Live Upgrade is included in the Solaris software. You need to install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 5/09 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 5/09 release.

Table 3–1 lists releases that are supported by Solaris Live Upgrade.

Table 3–1 Supported Solaris Releases

  • If your current release is the Solaris 8 OS, you can upgrade to the Solaris 8, Solaris 9, or any Solaris 10 release.

  • If your current release is the Solaris 9 OS, you can upgrade to the Solaris 9 or any Solaris 10 release.

  • If your current release is the Solaris 10 OS, you can upgrade to any Solaris 10 release.

Installing Solaris Live Upgrade

You can install the Solaris Live Upgrade packages by using the pkgadd command or by using the Solaris installation program on the installation media.

Be aware that the following patches might need to be installed for the correct operation of Solaris Live Upgrade.

Description 

For More Information 

Caution: Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches.


x86 only –

If this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. If you don't see the following error message, necessary patches still might not be installed. Always verify that all patches listed on the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade.


ERROR: Cannot find or is not executable: 
</sbin/biosdev>.
ERROR: One or more patches required 
by Live Upgrade has not been installed.

The patches listed in info doc 206844 (formerly 72099) are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as fix defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, please check and make sure that you have the latest Solaris Live Upgrade patches installed. 

Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

If you are running the Solaris 8 or 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches that is needed to run the Java 2 runtime environment. To run the Solaris Live Upgrade installer and install the packages, you must have the patch cluster that is recommended for the Java 2 runtime environment. 

To install the Solaris Live Upgrade packages, use the pkgadd command. Or, install the recommended patch cluster for the Java 2 runtime environment. The patch cluster is available on http://sunsolve.sun.com.

For instructions about installing the Solaris Live Upgrade software, see Installing Solaris Live Upgrade.

Required Packages

If you have problems with Solaris Live Upgrade, you might be missing packages. In the following table, check that your OS has the listed packages, which are required to use Solaris Live Upgrade.

For the Solaris 10 release:

For information about software groups, see Disk Space Recommendations for Software Groups in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Table 3–2 Required Packages for Solaris Live Upgrade

Solaris 8 Release: SUNWadmap, SUNWadmc, SUNWlibC, SUNWbzip, SUNWgzip, SUNWj2rt

Solaris 9 Release: SUNWadmap, SUNWadmc, SUNWadmfw, SUNWlibC, SUNWgzip, SUNWj2rt

Solaris 10 Release: SUNWadmap, SUNWadmlib-sysid, SUNWadmr, SUNWlibC, SUNWgzip (for the Solaris 10 3/05 release only), SUNWj5rt


Note –

The SUNWj2rt package (Solaris 8 and Solaris 9 releases) and the SUNWj5rt package (Solaris 10 release) are needed only under the following conditions:

  • When you run the Solaris Live Upgrade installer to add Solaris Live Upgrade packages

  • When you upgrade and use CD media


To check for packages on your system, type the following command.


% pkginfo package_name
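
For example, to check for two of the packages listed in Table 3–2 (the package names shown here are examples taken from that table), type:


% pkginfo SUNWadmap SUNWlibC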

Solaris Live Upgrade Disk Space Requirements

Follow general disk space requirements for an upgrade. See Chapter 4, System Requirements, Guidelines, and Upgrade (Planning), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

To estimate the file system size that is needed to create a boot environment, start the creation of a new boot environment. The size is calculated. You can then abort the process.

The disk on the new boot environment must be able to serve as a boot device. Some systems restrict which disks can serve as a boot device. Refer to your system's documentation to determine if any boot restrictions apply.

The disk might need to be prepared before you create the new boot environment. Check that the disk is formatted properly:

Solaris Live Upgrade Requirements if Creating RAID-1 Volumes (Mirrors)

Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems that are RAID-1 volumes (mirrors). Solaris Live Upgrade does not implement the full functionality of Solaris Volume Manager, but does require the following components of Solaris Volume Manager.

Table 3–3 Required Components for Solaris Live Upgrade and RAID-1 Volumes

Requirement  

Description 

For More Information 

You must create at least one state database and at least three state database replicas.  

A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. When a state database is copied, the replica protects against data loss from single points of failure. 

For information about creating a state database, see Chapter 6, State Database (Overview), in Solaris Volume Manager Administration Guide.

Solaris Live Upgrade supports only a RAID-1 volume (mirror) with single-slice concatenations on the root (/) file system.

A concatenation is a RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a RAID-1 volume. 

A RAID-1 volume can comprise a maximum of three concatenations.  

For guidelines about creating mirrored file systems, see Guidelines for Selecting Slices for Mirrored File Systems.

Upgrading a System With Packages or Patches

You can use Solaris Live Upgrade to add patches and packages to a system. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command. When you use the luupgrade command, you can also use a Solaris Flash archive to install patches or packages.


Caution – Caution –

When upgrading and adding or removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail during an upgrade, or can alter the active boot environment.

For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).


Type of Installation 

Description 

For More Information 

Adding patches to a boot environment  

Create a new boot environment and use the luupgrade command with the -t option.

To Add Patches to a Network Installation Image on a Boot Environment

Adding packages to a boot environment 

Use the luupgrade command with the -p option.

To Add Packages to a Network Installation Image on a Boot Environment

Using Solaris Live Upgrade to install a Solaris Flash archive 

An archive contains a complete copy of a boot environment with new packages and patches already included. This copy can be installed on multiple systems. 
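
For example, commands similar to the following sketches use the -t and -p options that are described above to add a patch and a package to a boot environment named second_disk. The boot environment name, source paths, patch ID, and package name are examples only.


# luupgrade -t -n second_disk -s /var/tmp/lupatches 123456-07
# luupgrade -p -n second_disk -s /var/tmp/packages SUNWpkgname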

Upgrading and Patching Limitations

For upgrading and patching limitations, see Upgrading and Patching Limitations in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Guidelines for Creating File Systems With the lucreate Command

The lucreate command used with the -m option specifies which file systems to create in the new boot environment and how many to create. You must specify the exact number of file systems you want to create by repeating this option. When using the -m option to create file systems, follow these guidelines:

Guidelines for Selecting Slices for File Systems

When you create file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from creating invalid configurations for critical file systems. For example, you could type a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.

Do not overlap slices when reslicing disks. If this condition exists, the new boot environment appears to have been created, but when activated, the boot environment does not boot. The overlapping file systems might be corrupted.

For Solaris Live Upgrade to work properly, the vfstab file on the active boot environment must have valid contents and must have an entry for the root (/) file system at the minimum.

Guidelines for Selecting a Slice for the root (/) File System

When you create an inactive boot environment, you need to identify a slice where the root (/) file system is to be copied. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following:

Guidelines for Selecting Slices for Mirrored File Systems

You can create a new boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. Critical file systems that are copied to the new boot environment can be of the following types:

When you create a new boot environment, the lucreate -m command recognizes the following three types of devices:


Note –

If you have problems upgrading with Veritas VxVM, see System Panics When Upgrading With Solaris Live Upgrade Running Veritas VxVm.


General Guidelines When Creating RAID-1 Volumes (Mirrored) File Systems

Use the following guidelines to check if a RAID-1 volume is busy, resyncing, or if volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.

For volume naming guidelines, see RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Checking Status of Volumes

If a mirror or submirror needs maintenance or is busy, components cannot be detached. You should use the metastat command before creating a new boot environment and using the detach keyword. The metastat command checks if the mirror is in the process of resynchronization or if the mirror is in use. For information, see the man page metastat(1M).
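
For example, a command similar to the following displays the status of a mirror named d30 (an example volume name) so that you can verify that the mirror is not resyncing before you detach a submirror:


# metastat d30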

Detaching Volumes and Resynchronizing Mirrors

If you use the detach keyword to detach a submirror, lucreate checks if a device is currently resyncing. If the device is resyncing, you cannot detach the submirror and you see an error message.

Resynchronization is the process of copying data from one submirror to another submirror after the following problems:

For more information about resynchronization, see RAID-1 Volume (Mirror) Resynchronization in Solaris Volume Manager Administration Guide.

Using Solaris Volume Manager Commands

Use the lucreate command rather than Solaris Volume Manager commands to manipulate volumes on inactive boot environments. The Solaris Volume Manager software has no knowledge of boot environments, whereas the lucreate command contains checks that prevent you from inadvertently destroying a boot environment. For example, lucreate prevents you from overwriting or deleting a Solaris Volume Manager volume.

However, if you have already used Solaris Volume Manager software to create complex Solaris Volume Manager concatenations, stripes, and mirrors, you must use Solaris Volume Manager software to manipulate them. Solaris Live Upgrade is aware of these components and supports their use. Before using Solaris Volume Manager commands that can create, modify, or destroy volume components, use the lustatus or lufslist commands. These commands can determine which Solaris Volume Manager volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.

Guidelines for Selecting a Slice for a Swap Volume

These guidelines contain configuration recommendations and examples for a swap slice.

Configuring Swap for the New Boot Environment

You can configure a swap slice in three ways by using the lucreate command with the -m option:

The following examples show the three ways of configuring swap. The current boot environment is configured with the root (/) file system on c0t0d0s0. The swap file system is on c0t0d0s1.
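
As one example of these configurations, a command similar to the following sketch creates a new boot environment and places swap on a new slice; the boot environment name and device names are examples only.


# lucreate -n second_disk -m /:/dev/dsk/c0t4d0s0:ufs \
-m -:/dev/dsk/c0t4d0s1:swap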

Failed Boot Environment Creation if Swap is in Use

A boot environment creation fails if the swap slice is being used by any boot environment except for the current boot environment. If the boot environment was created using the -s option, the alternate-source boot environment can use the swap slice, but not any other boot environment.

Guidelines for Selecting Slices for Shareable File Systems

Solaris Live Upgrade copies the entire contents of a slice to the designated new boot environment slice. You might want some large file systems on that slice to be shared between boot environments rather than copied, to conserve space and copying time. File systems that are critical to the OS, such as root (/) and /var, must be copied. File systems such as /home are not critical file systems and could be shared between boot environments. Shareable file systems must be user-defined file systems and must be on separate disk slices on both the active and new boot environments. You can reconfigure the disk in several ways, depending on your needs.

Reconfiguring a disk 

Examples 

For More Information 

You can reslice the disk before creating the new boot environment and put the shareable file system on its own slice.  

For example, if the root (/) file system, /var, and /home are on the same slice, reconfigure the disk and put /home on its own slice. When you create any new boot environments, /home is shared with the new boot environment by default.

format(1M)

If you want to share a directory, the directory must be split off to its own slice. The directory is then a file system that can be shared with another boot environment. You can use the lucreate command with the -m option to create a new boot environment and split a directory off to its own slice. But, the new file system cannot yet be shared with the original boot environment. You need to run the lucreate command with the -m option again to create another boot environment. The two new boot environments can then share the directory.

For example, if you wanted to upgrade from the Solaris 9 release to the Solaris 10 5/09 release and share /home, you could run the lucreate command with the -m option. You could create a Solaris 9 release with /home as a separate file system on its own slice. Then run the lucreate command with the -m option again to duplicate that boot environment. This third boot environment can then be upgraded to the Solaris 10 5/09 release. /home is shared between the Solaris 9 and Solaris 10 5/09 releases.

For a description of shareable and critical file systems, see File System Types.

Customizing a New Boot Environment's Content

When you create a new boot environment, some directories and files can be excluded from a copy to the new boot environment. If you have excluded a directory, you can also reinstate specified subdirectories or files under the excluded directory. These subdirectories or files that have been restored are then copied to the new boot environment. For example, you could exclude from the copy all files and directories in /etc/mail, but include all files and directories in /etc/mail/staff. The following command copies the staff subdirectory to the new boot environment.


# lucreate -n second_disk -x /etc/mail -y /etc/mail/staff

Caution – Caution –

Use the file-exclusion options with caution. Do not remove files or directories that are required by the system.


The following table lists the lucreate command options for removing and restoring directories and files.

How Specified? 

Exclude Options  

Include Options 

Specify the name of the directory or file 

-x exclude_dir

-y include_dir

Use a file that contains a list 

-f list_filename

-z list_filename

-Y list_filename

-z list_filename

For examples of customizing the directories and files when creating a boot environment, see To Create a Boot Environment and Customize the Content.

Synchronizing Files Between Boot Environments

When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain critical system files and directories might be copied from the last-active boot environment to the boot environment being booted. Those files and directories that have changed are copied.

Adding Files to the /etc/lu/synclist

Solaris Live Upgrade checks for critical files that have changed. If these files' content is not the same in both boot environments, they are copied from the active boot environment to the new boot environment. Synchronizing is meant for critical files such as /etc/passwd or /etc/group files that might have changed since the new boot environment was created.

The /etc/lu/synclist file contains a list of directories and files that are synchronized. In some instances, you might want to copy other files from the active boot environment to the new boot environment. You can add directories and files to /etc/lu/synclist if necessary.

Adding files not listed in the /etc/lu/synclist could cause a system to become unbootable. The synchronization process only copies files and creates directories. The process does not remove files and directories.

The following example of the /etc/lu/synclist file shows the standard directories and files that are synchronized for this system.


/var/mail                    OVERWRITE
/var/spool/mqueue            OVERWRITE
/var/spool/cron/crontabs     OVERWRITE
/var/dhcp                    OVERWRITE
/etc/passwd                  OVERWRITE
/etc/shadow                  OVERWRITE
/etc/opasswd                 OVERWRITE
/etc/oshadow                 OVERWRITE
/etc/group                   OVERWRITE
/etc/pwhist                  OVERWRITE
/etc/default/passwd          OVERWRITE
/etc/dfs                     OVERWRITE
/var/log/syslog              APPEND
/var/adm/messages            APPEND

Examples of directories and files that might be appropriate to add to the synclist file are the following:


/var/yp                    OVERWRITE
/etc/mail                  OVERWRITE
/etc/resolv.conf           OVERWRITE
/etc/domainname            OVERWRITE

The synclist file entries can be files or directories. The second field is the method of updating that occurs on the activation of the boot environment. You can choose from three methods to update files:

Forcing a Synchronization Between Boot Environments

The first time you boot from a newly created boot environment, Solaris Live Upgrade synchronizes the new boot environment with the boot environment that was last active. After this initial boot and synchronization, Solaris Live Upgrade does not perform a synchronization unless requested. To force a synchronization, you use the luactivate command with the -s option.
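
For example, a command similar to the following forces a synchronization when activating a boot environment named second_disk (an example name):


# luactivate -s second_disk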

You might want to force a synchronization if you are maintaining multiple versions of the Solaris OS. You might want changes in files such as email or passwd/group files to be in the boot environment that you are activating. If you force a synchronization, Solaris Live Upgrade checks for conflicts between files that are subject to synchronization. When the new boot environment is booted and a conflict is detected, a warning is issued and the files are not synchronized. Activation can be completed successfully despite such a conflict. A conflict can occur if you make changes to the same file on both the new boot environment and the active boot environment. For example, you make changes to the /etc/passwd file on the original boot environment. Then you make other changes to the /etc/passwd file on the new boot environment. The synchronization process cannot choose which file to copy for the synchronization.


Caution – Caution –

Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 5/09 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 5/09 files might not be compatible with the Solaris 9 files.


Booting Multiple Boot Environments

If more than one OS is installed on the system, you can boot from these boot environments on both SPARC and x86 platforms. The boot environments available for booting include Solaris Live Upgrade inactive boot environments.

On both SPARC and x86 based systems, each ZFS root pool has a dataset designated as the default root file system. On SPARC, if you type the boot command, or on x86, if you accept the default entry in the GRUB menu, this default root file system is booted.


Note –

If the GRUB menu has been explicitly modified to designate a default menu item other than the one set by Solaris Live Upgrade, then selecting that default menu entry might not result in the booting of the pool's default root file system.


For more information about booting and modifying the GRUB boot menu, see the following references.

To activate a boot environment with the GRUB menu, see x86: To Activate a Boot Environment With the GRUB Menu.

To fall back to the original boot environment with the GRUB menu, see x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu.

For SPARC and x86 information and step-by-step procedures for booting and modifying boot behavior, see System Administration Guide: Basic Administration.

For an overview and step-by-step procedures for booting ZFS boot environments, see Booting From a ZFS Root File System in Solaris ZFS Administration Guide.

Solaris Live Upgrade Character User Interface

Sun no longer recommends use of the lu command. The lu command displays a character user interface (CUI). The underlying command sequence for the CUI, typically the lucreate, luupgrade, and luactivate commands, is straightforward to use. Procedures for these commands are provided in the following chapters.

Chapter 4 Using Solaris Live Upgrade to Create a Boot Environment (Tasks)

This chapter explains how to install Solaris Live Upgrade packages and patches and to create a boot environment.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For procedures for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 13, Creating a Boot Environment for ZFS Root Pools.


This chapter contains the following sections:

Task Map: Installing Solaris Live Upgrade and Creating Boot Environments

Table 4–1 Task Map: Using Solaris Live Upgrade

Task  

Description 

For Instructions 

Install Solaris Live Upgrade packages 

Install packages on your OS 

Installing Solaris Live Upgrade

Install patches on your system 

Solaris Live Upgrade requires a limited set of patch revisions 

Installing Patches Needed by Solaris Live Upgrade

Create a boot environment 

Copy and reconfigure file systems to an inactive boot environment 

Creating a New Boot Environment

Installing Solaris Live Upgrade

Before running Solaris Live Upgrade, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve info doc 206844. You need to remove the old packages and install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 5/09 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 5/09 release. The patches listed in SunSolve info doc 206844 also need to be installed. The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

The SunSolve info doc 206844 describes how to remove old packages and install new packages, as well as lists the required patches. The procedures below provide more detail about the steps described in info doc 206844.

ProcedureTo Install Solaris Live Upgrade With the pkgadd Command

You can install the packages by using the liveupgrade20 command that is on the installation DVD or CD, or by using the pkgadd command. The liveupgrade20 command requires Java software. If your system does not have Java software installed, then you need to use the pkgadd command to install the packages. See the SunSolve info doc for more information.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Remove existing Solaris Live Upgrade packages.

    The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade or patch by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading or patching to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.


    # pkgrm SUNWlucfg SUNWluu SUNWlur
    
  3. Install the packages in the following order.


    # pkgadd -d path_to_packages SUNWlucfg SUNWlur SUNWluu   
    
    path_to_packages

    Specifies the absolute path to the software packages.

  4. Verify that the package has been installed successfully.


    # pkgchk -v SUNWlucfg SUNWlur SUNWluu
    

ProcedureTo Install Solaris Live Upgrade With the Solaris Installation Program

You can install the packages by using the liveupgrade20 command that is on the installation DVD or CD, or by using the pkgadd command. The liveupgrade20 command requires Java software. If your system does not have Java software installed, then you need to use the pkgadd command to install the packages. See the SunSolve info doc for more information.


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Remove existing Solaris Live Upgrade packages.

    The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade or patch by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading or patching to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.


    # pkgrm SUNWlucfg SUNWluu SUNWlur
    
  3. Insert the Solaris Operating System DVD or Solaris Software - 2 CD.

  4. Run the installer for the media you are using.

    • If you are using the Solaris Operating System DVD, change directories to the installer and run the installer.


      # cd /cdrom/cdrom0/Solaris_10/Tools/Installers
      # ./liveupgrade20
      

      The Solaris installation program GUI is displayed. If you are using a script, you can prevent the GUI from displaying by using the -noconsole and -nodisplay options.

    • If you are using the Solaris Software - 2 CD, run the installer.


      % ./installer
      

      The Solaris installation program GUI is displayed.

  5. From the Select Type of Install panel, click Custom.

  6. On the Locale Selection panel, click the language to be installed.

  7. Choose the software to install.

    • For DVD, on the Component Selection panel, click Next to install the packages.

    • For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click on the other software choices to deselect them.

  8. Follow the directions on the Solaris installation program panels to install the software.

    You are ready to install the required patches.

Installing Patches Needed by Solaris Live Upgrade

Description 

For More Information 


Caution – Caution –

Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches.



x86 only –

If this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. If you don't see the following error message, necessary patches still might not be installed. Always verify that all patches listed on the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade.


ERROR: Cannot find or is not 
executable: </sbin/biosdev>.
ERROR: One or more patches required by 
Live Upgrade has not been installed.

The patches listed in info doc 206844 (formerly 72099) are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as fix defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, please check and make sure that you have the latest Solaris Live Upgrade patches installed. 

Ensure you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches that is needed to run the Java 2 runtime environment. To run the Solaris Live Upgrade installer and install the packages, you must have the patch cluster that is recommended for the Java 2 runtime environment. 

To install the Solaris Live Upgrade packages, use the pkgadd command. Or, install the recommended patch cluster for the Java 2 runtime environment. The patch cluster is available at http://sunsolve.sun.com.

ProcedureTo Install Required Patches

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches.

  3. From the SunSolve web site, obtain the list of patches.

  4. Change to the patch directory as in this example.


    # cd /var/tmp/lupatches
    
  5. Install the patches with the patchadd command.


    # patchadd -M path_to_patches patch-id patch-id
    

    patch-id is the patch number or numbers. Separate multiple patch numbers with a space.


    Note –

    The patches need to be applied in the order specified in infodoc 206844.


  6. Reboot the system if necessary. Certain patches require a reboot to be effective.

    x86 only: Rebooting the system is required or Solaris Live Upgrade fails.


    # init 6
    

    You now have the packages and patches necessary for a successful creation of a new boot environment.

Creating a New Boot Environment

Creating a boot environment provides a method of copying critical file systems from the active boot environment to a new boot environment. The lucreate command enables reorganizing a disk if necessary, customizing file systems, and copying the critical file systems to the new boot environment.

Before file systems are copied to the new boot environment, they can be customized so that critical file system directories are either merged into their parent directory or split from their parent directory. User-defined (shareable) file systems are shared between boot environments by default. But shareable file systems can be copied if needed. Swap, which is a shareable volume, can be split and merged also. For an overview of critical and shareable file systems, see File System Types.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For procedures for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 13, Creating a Boot Environment for ZFS Root Pools.


Procedure To Create a Boot Environment for the First Time

The lucreate command used with the -m option specifies which file systems, and how many file systems, are created in the new boot environment. You must specify the exact number of file systems that you want to create by repeating this option. For example, a single use of the -m option specifies where to put all the file systems: all the file systems from the original boot environment are merged into the one file system that is specified by the -m option. If you specify the -m option twice, you create two file systems, and so on.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To create the new boot environment, type:


    # lucreate [-A 'BE_description'] -c BE_name \
     -m mountpoint:device[,metadevice]:fs_options [-m ...] -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -c BE_name

    Assigns the name BE_name to the active boot environment. This option is not required and is only used when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.

    The default name is chosen according to the following criteria:

    • If the physical boot device can be determined, then the base name of the physical boot device is used to name the current boot environment.

      For example, if the physical boot device is /dev/dsk/c0t0d0s0, then the current boot environment is given the name c0t0d0s0.

    • If the physical boot device cannot be determined, then names from the uname command with the -s and -r options are combined to produce the name.

      For example, if the uname -s returns the OS name of SunOS and the uname -r returns the release name of 5.9, then the name SunOS5.9 is given to the current boot environment.

    • If both of the above cannot determine the name, then the name current is used to name the current boot environment.


    Note –

    If you use the -c option after the first boot environment creation, the option is ignored or an error message is displayed.

    • If the name specified is the same as the current boot environment name, the option is ignored.

    • If the name specified is different from the current boot environment name, then an error message is displayed and the creation fails. The following example shows a boot environment name that causes an error message.


      # lucurr 
      c0t0d0s0
      # lucreate -c /dev/dsk/c1t1d1s1 -n newbe -m /:/dev/dsk/c1t1d1s1:ufs
      ERROR: current boot environment name is c0t0d0s0: cannot change
      name using <-c c1t1d1s1>

    -m mountpoint:device[,metadevice]:fs_options [-m ...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–1 Creating a Boot Environment

In this example, the active boot environment is named first_disk. The mount points for the file systems are noted by using the -m option. Two file systems are created, root (/) and /usr. The new boot environment is named second_disk. A description, mydescription, is associated with the name second_disk. Swap, in the new boot environment second_disk, is automatically shared from the source, first_disk.


# lucreate -A 'mydescription' -c first_disk  -m /:/dev/dsk/c0t4d0s0:ufs \
-m /usr:/dev/dsk/c0t4d0s3:ufs  -n second_disk

Procedure To Create a Boot Environment and Merge File Systems


Note –

You can use the lucreate command with the -m option to specify which file systems, and how many file systems, are created in the new boot environment. You must specify the exact number of file systems that you want to create by repeating this option. For example, a single use of the -m option specifies where to put all the file systems: all the file systems from the original boot environment are merged into one file system. If you specify the -m option twice, you create two file systems.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucreate  -A 'BE_description' \ 
    -m mountpoint:device[,metadevice]:fs_options \ 
    -m [...] -m mountpoint:merged:fs_options -n BE_name
    
    -A BE_description

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–2 Creating a Boot Environment and Merging File Systems

In this example, the file systems on the current boot environment are root (/), /usr, and /opt. The /opt file system is combined with its parent file system /usr. The new boot environment is named second_disk. A description, mydescription, is associated with the name second_disk.


# lucreate -A 'mydescription' -c first_disk \
 -m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
 -m /usr/opt:merged:ufs -n second_disk

Procedure To Create a Boot Environment and Split File Systems


Note –

When creating file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from making invalid configurations on critical file systems. For example, you could enter an lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.


When splitting a directory into multiple mount points, hard links are not maintained across file systems. For example, if /usr/stuff1/file is hard linked to /usr/stuff2/file, and /usr/stuff1 and /usr/stuff2 are split into separate file systems, the link between the files no longer exists. lucreate issues a warning message and a symbolic link is created to replace the lost hard link.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucreate [-A 'BE_description'] \
     -m mountpoint:device[,metadevice]:fs_options \ 
    -m mountpoint:device[,metadevice]:fs_options -n new_BE
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.


Example 4–3 Creating a Boot Environment and Splitting File Systems

In this example, the command splits the root (/) file system over multiple disk slices in the new boot environment. Assume a source boot environment that has /usr, /var, and /opt all on the root (/) file system, on a single slice: /dev/dsk/c0t0d0s0 /.

On the new boot environment, separate /usr, /var, and /opt, mounting these file systems on their own slices, as follows:

/dev/dsk/c0t1d0s0 /

/dev/dsk/c0t1d0s1 /var

/dev/dsk/c0t1d0s7 /usr

/dev/dsk/c0t1d0s5 /opt

A description, mydescription, is associated with the boot environment name second_disk.


# lucreate -A 'mydescription' -c first_disk \
 -m /:/dev/dsk/c0t1d0s0:ufs -m /usr:/dev/dsk/c0t1d0s7:ufs  \ 
-m /var:/dev/dsk/c0t1d0s1:ufs -m /opt:/dev/dsk/c0t1d0s5:ufs \ 
-n second_disk

When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Procedure To Create a Boot Environment and Reconfigure Swap

Swap slices are shared between boot environments by default. If you do not specify swap with the -m option, your current and new boot environments share the same swap slices. If you want to reconfigure the new boot environment's swap, use the -m option to add or remove swap slices in the new boot environment.


Note –

The swap slice cannot be in use by any boot environment except the current boot environment or, if the -s option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains a swap, UFS, or any other file system.

You can create a boot environment with the existing swap slices and then edit the vfstab file after the creation.
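
For example, a swap entry in the boot environment's /etc/vfstab file might look like the following line. The device name is illustrative only.


/dev/dsk/c0t4d0s1    -    -    swap    -    no    -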


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucreate  [-A 'BE_description'] \
     -m mountpoint:device[,metadevice]:fs_options \ 
    -m -:device:swap -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique.

    The new boot environment is created with swap moved to a different slice or device.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–4 Creating a Boot Environment and Reconfiguring Swap

In this example, the current boot environment contains root (/) on /dev/dsk/c0t0d0s0 and swap is on /dev/dsk/c0t0d0s1. The new boot environment copies root (/) to /dev/dsk/c0t4d0s0 and uses both /dev/dsk/c0t0d0s1 and /dev/dsk/c0t4d0s1 as swap slices. A description, mydescription, is associated with the boot environment name second_disk.


# lucreate -A 'mydescription' -c first_disk \ 
-m /:/dev/dsk/c0t4d0s0:ufs -m -:/dev/dsk/c0t0d0s1:swap \ 
-m -:/dev/dsk/c0t4d0s1:swap -n second_disk 

These swap assignments are effective only after booting from second_disk. If you have a long list of swap slices, use the -M option. See To Create a Boot Environment and Reconfigure Swap by Using a List.


Procedure To Create a Boot Environment and Reconfigure Swap by Using a List

If you have a long list of swap slices, create a swap list. lucreate uses this list for the swap slices in the new boot environment.


Note –

The swap slice cannot be in use by any boot environment except the current boot environment or, if the -s option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the swap slice contains a swap, UFS, or any other file system.


  1. Create a list of swap slices to be used in the new boot environment. The location and name of this file are user defined. In this example, the content of the /etc/lu/swapslices file is a list of devices and slices:


    -:/dev/dsk/c0t3d0s2:swap
    -:/dev/dsk/c0t4d0s2:swap
    -:/dev/dsk/c0t5d0s2:swap
    -:/dev/dsk/c1t3d0s2:swap
    -:/dev/dsk/c1t4d0s2:swap
    -:/dev/dsk/c1t5d0s2:swap
  2. Type:


    # lucreate  [-A 'BE_description'] \
     -m mountpoint:device[,metadevice]:fs_options \
    -M slice_list  -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -M slice_list

    List of -m options, which are collected in the file slice_list. Specify these arguments in the format that is specified for -m. Comment lines, which begin with a hash mark (#), are ignored. The -M option is useful when you have a long list of file systems for a boot environment. Note that you can combine -m and -M options. For example, you can store swap slices in slice_list and specify root (/) and /usr slices with -m.

    The -m and -M options support the listing of multiple slices for a particular mount point. In processing these slices, lucreate skips any unavailable slices and selects the first available slice.
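
    For example, a slice_list file can list more than one candidate slice for the same mount point. The device names that follow are illustrative only; lucreate uses the first slice that is available.


    # Candidate slices for /usr; lucreate selects the first available slice
    /usr:/dev/dsk/c0t4d0s3:ufs
    /usr:/dev/dsk/c1t4d0s3:ufs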

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–5 Creating a Boot Environment and Reconfiguring Swap by Using a List

In this example, swap in the new boot environment is the list of slices that are noted in the /etc/lu/swapslices file. A description, mydescription, is associated with the name second_disk.


# lucreate -A 'mydescription' -c first_disk \ 
-m /:/dev/dsk/c02t4d0s0:ufs -m /usr:/dev/dsk/c02t4d0s1:ufs \ 
-M /etc/lu/swapslices -n second_disk 

Procedure To Create a Boot Environment and Copy a Shareable File System

If you want a shareable file system to be copied to the new boot environment, specify the mount point to be copied with the -m option. Otherwise, shareable file systems are shared by default, and maintain the same mount point in the vfstab file. Any updating that is applied to the shareable file system is available to both boot environments.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create the boot environment.


    # lucreate [-A 'BE_description'] \ 
    -m mountpoint:device[,metadevice]:fs_options \ 
    -m mountpoint:device[,metadevice]:fs_options  -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique.

    When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–6 Creating a Boot Environment and Copying a Shareable File System

In this example, the current boot environment contains two file systems, root (/) and /home. In the new boot environment, the root (/) file system is split into two file systems, root (/) and /usr. The /home file system is copied to the new boot environment. A description, mydescription, is associated with the boot environment name second_disk.


# lucreate -A 'mydescription' -c first_disk \ 
-m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s3:ufs \
-m /home:/dev/dsk/c0t4d0s4:ufs -n second_disk

Procedure To Create a Boot Environment From a Different Source

The lucreate command creates a boot environment that is based on the file systems in the active boot environment. If you want to create a boot environment based on a boot environment other than the active boot environment, use lucreate with the -s option.
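
Before choosing a source, you can confirm which boot environment is currently active by using the lucurr command and list all existing boot environments by using the lustatus command. The boot environment name shown in the output is illustrative only.


# lucurr
first_disk
# lustatus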


Note –

If you activate the new boot environment and need to fall back, you boot back to the boot environment that was last active, not the source boot environment.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create the boot environment.


    # lucreate [-A 'BE_description'] -s source_BE_name 
    -m mountpoint:device[,metadevice]:fs_options -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -s source_BE_name

    Specifies the source boot environment for the new boot environment. The source is a boot environment other than the active boot environment.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–7 Creating a Boot Environment From a Different Source

In this example, a boot environment is created that is based on the root (/) file system in the source boot environment named third_disk. The third_disk boot environment is not the active boot environment. A description, mydescription, is associated with the new boot environment named second_disk.


# lucreate -A 'mydescription' -s third_disk \ 
-m /:/dev/dsk/c0t4d0s0:ufs  -n second_disk

Procedure To Create an Empty Boot Environment for a Solaris Flash Archive

The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When using the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. The slices are reserved for the file systems that are specified, but no file systems are copied. The boot environment is named, but not actually created until installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, file systems are installed on the reserved slices.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create the empty boot environment.


    # lucreate -A 'BE_description' -s - \ 
    -m mountpoint:device[,metadevice]:fs_options -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -s -

    Specifies that an empty boot environment be created.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.


Example 4–8 Creating an Empty Boot Environment for a Solaris Flash Archive

In this example, a boot environment is created but contains no file systems. A description, mydescription, is associated with the new boot environment that is named second_disk.


# lucreate -A 'mydescription' -s - \ 
-m /:/dev/dsk/c0t1d0s0:ufs  -n second_disk

When creation of the empty boot environment is complete, a Solaris Flash archive can be installed, and the boot environment can then be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).

For an example of creating and populating an empty boot environment, see Example of Creating an Empty Boot Environment and Installing a Solaris Flash Archive.


Procedure To Create a Boot Environment With RAID-1 Volumes (Mirrors)

When you create a boot environment, Solaris Live Upgrade uses Solaris Volume Manager technology to create RAID-1 volumes. When creating a boot environment, you can use Solaris Live Upgrade to create a mirror, attach and detach submirrors (single-slice concatenations), and preserve the contents of an existing file system.

Before You Begin

To use the mirroring capabilities of Solaris Live Upgrade, you must create a state database and a state database replica. A state database stores information on disk about the state of your Solaris Volume Manager configuration.
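
For example, assuming a spare slice such as c0t0d0s7 is available (the slice name is illustrative only), you could create the initial state database replicas with the metadb command and then verify them. For details, see the Solaris Volume Manager Administration Guide.


# metadb -a -f -c 3 c0t0d0s7
# metadb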

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To create the new boot environment, type:


    # lucreate [-A 'BE_description']  \ 
    -m mountpoint:device[,metadevice]:fs_options [-m...] \ 
    -n BE_name
    
    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name BE_name. The description can be any length and can contain any characters.

    -m mountpoint:device[,metadevice]:fs_options [-m...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following types of file systems and keywords:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device.

        • mirror creates a RAID–1 volume or mirror on the specified device. In subsequent -m options, you must specify attach to attach at least one concatenation to the new mirror. The specified device must be correctly named. For example, a logical device name of /dev/md/dsk/d10 can serve as a mirror name. For more information about naming devices, see Overview of Solaris Volume Manager Components in Solaris Volume Manager Administration Guide.

        • detach removes a concatenation from a volume that is associated with a specified mount point. The volume does not need to be specified.

        • attach attaches a concatenation to the mirror that is associated with a specified mount point. The physical disk slice that is specified is made into a single-device concatenation for attaching to the mirror. To name the concatenation that is created for a disk slice, append a comma and the name of that concatenation to the device name. If you omit the comma and the concatenation name, lucreate selects a free volume for the concatenation.

          lucreate allows you to create only concatenations that contain a single physical slice. This command allows you to attach up to three concatenations to a mirror.

        • preserve saves the existing file system and its content. This keyword enables you to bypass the copying process that copies the content of the source boot environment. Saving the content enables a quick creation of the new boot environment. For a particular mount point, you can use preserve with only one physical device. When you use preserve, lucreate checks that the device's content is suitable for a specified file system. This check is limited and cannot guarantee suitability.

          The preserve keyword can be used with both a physical slice and a Solaris Volume Manager volume.

          • If you use the preserve keyword when the UFS file system is on a physical slice, the content of the UFS file system is saved on the slice. In the following example of the -m option, the preserve keyword saves the content of the physical device c0t0d0s0 as the file system for the mount point for the root (/) file system.


            -m /:/dev/dsk/c0t0d0s0:preserve,ufs
            
          • If you use the preserve keyword when the UFS file system is on a volume, the contents of the UFS file system are saved on the volume.

            In the following example of the -m option, the preserve keyword saves the contents of the RAID-1 volume (mirror) d10 as the file system for the mount point for the root (/) file system.


            -m /:/dev/md/dsk/d10:preserve,ufs
            

            In the following example of the -m option, a RAID-1 volume (mirror) d10 is configured as the file system for the mount point for the root (/) file system. The single-slice concatenation d20 is detached from its current mirror. d20 is attached to mirror d10. The root (/) file system is preserved on submirror d20.


            -m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/md/dsk/d20:detach,attach,preserve
            
    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–9 Creating a Boot Environment With a Mirror and Specifying Devices

In this example, the mount points for the file systems are specified by using the -m option. A mirror, d10, is created for the root (/) file system. The slices c0t0d0s0 and c0t1d0s0 are made into single-slice concatenations, d1 and d2, and are attached to the mirror d10. The new boot environment is named another_disk.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:attach \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach -n another_disk


Example 4–10 Creating a Boot Environment With a Mirror and Not Specifying a Submirror Name

In this example, the mount points for the file systems are specified by using the -m option. A mirror, d10, is created for the root (/) file system, and the slices c0t0d0s0 and c0t1d0s0 are attached to the mirror. Because no submirror names are specified, lucreate selects free volume names for the concatenations. The new boot environment is named another_disk.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0:attach \ 
-m /:/dev/dsk/c0t1d0s0:attach -n another_disk

When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).



Example 4–11 Creating a Boot Environment and Detaching a Submirror

In this example, the mount points for the file systems are specified by using the -m option. A mirror, d10, is created for the root (/) file system. The concatenation d1, which contains the slice c0t0d0s0, is detached from its current mirror, attached to the mirror d10, and its contents are preserved. The slice c0t1d0s0 is made into the concatenation d2 and is attached to d10. The new boot environment is named another_disk.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:detach,attach,preserve \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach -n another_disk

When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).



Example 4–12 Creating a Boot Environment, Detaching a Submirror, and Saving Its Contents

In this example, the mount points for the file systems are specified by using the -m option. A mirror, d20, is created for the root (/) file system. The slice c0t0d0s0 is detached from its current mirror, attached to the mirror d20, and the existing contents of the root (/) file system are preserved. Because no concatenation name is specified, lucreate selects a free volume name for the concatenation. The new boot environment is named another_disk.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d20:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0:detach,attach,preserve \ 
-n another_disk

When the creation of the new boot environment is complete, the boot environment can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).



Example 4–13 Creating a Boot Environment With Two Mirrors

In this example, the mount points for the file systems are specified by using the -m option. Two mirrors are created: d10 for the root (/) file system and d11 for the /opt file system. Each mirror has two single-slice concatenations attached to it. The new boot environment is named another_disk.


# lucreate -A 'mydescription' \ 
-m /:/dev/md/dsk/d10:ufs,mirror \ 
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:attach \ 
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach \ 
-m /opt:/dev/md/dsk/d11:ufs,mirror \ 
-m /opt:/dev/dsk/c2t0d0s1,/dev/md/dsk/d3:attach \ 
-m /opt:/dev/dsk/c3t1d0s1,/dev/md/dsk/d4:attach -n another_disk

When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Procedure To Create a Boot Environment and Customize the Content

The content of the file systems on the new boot environment can be modified by using the following options. Directories and files that you exclude are not copied to the new boot environment.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To create the new boot environment, type:


    # lucreate -m mountpoint:device[,metadevice]:fs_options [-m ...]  \ 
    [-x exclude_dir] [-y include] \
    [-Y include_list_file] \
    [-f exclude_list_file]\  
    [-z filter_list] [-I] -n BE_name
    
    -m mountpoint:device[,metadevice]:fs_options [-m ...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a - (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    -x exclude_dir

    Excludes files and directories by not copying them to the new boot environment. You can use multiple instances of this option to exclude more than one file or directory.

    exclude_dir is the name of the directory or file.

    -y include_dir

    Copies directories and files that are listed to the new boot environment. This option is used when you have excluded a directory, but want to restore individual subdirectories or files.

    include_dir is the name of the subdirectory or file to be included.

    -Y list_filename

    Copies directories and files from a list to the new boot environment. This option is used when you have excluded a directory, but want to restore individual subdirectories or files.

    • list_filename is the full path to a file that contains a list.

    • The list_filename file must contain one file per line.

    • If a line item is a directory, all subdirectories and files beneath that directory are included. If a line item is a file, only that file is included.
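
    For example, a list file for the -Y option might contain the following entries. The path names are illustrative only. Because both entries are directories, all subdirectories and files beneath them are included.


    /mystuff/latest
    /mystuff/backup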

    -f list_filename

    Uses a list to exclude directories and files by not copying them to the new boot environment.

    • list_filename is the full path to a file that contains a list.

    • The list_filename file must contain one file per line.

    -z list_filename

    Uses a list to copy directories and files to the new boot environment. Each file or directory in the list is noted with a plus “+” or minus “-”. A plus indicates an included file or directory and the minus indicates an excluded file or directory.

    • list_filename is the full path to a file that contains a list.

    • The list_filename file must contain one file per line. A space must follow the plus or minus before the file name.

    • If a line item is a directory and is indicated with a + (plus), all subdirectories and files beneath that directory are included. If a line item is a file and is indicated with a + (plus), only that file is included.
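
    For example, a filter list file for the -z option might contain the following entries, which exclude a directory but restore two of its subdirectories. The path names are illustrative only.


    - /mystuff
    + /mystuff/latest
    + /mystuff/backup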

    -I

    Overrides the integrity check of system files. Use this option with caution.

    To prevent you from removing important system files from a boot environment, lucreate runs an integrity check. This check examines all files that are registered in the system package database and stops the boot environment creation if any files are excluded. Use of this option overrides this integrity check. This option creates the boot environment more quickly, but might not detect problems.

    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


Example 4–14 Creating a Boot Environment and Excluding Files

In this example, the new boot environment is named second_disk. The source boot environment contains one file system, root (/). In the new boot environment, the /var/mail file system is split from the root (/) file system and put on another slice. The lucreate command configures a UFS file system for the mount points root (/) and /var/mail. Also, two /var/mail files, root and staff, are not copied to the new boot environment. Swap is automatically shared between the source and the new boot environment.


# lucreate -n second_disk \ 
-m /:/dev/dsk/c0t1d0s0:ufs -m /var/mail:/dev/dsk/c0t2d0s0:ufs  \  
-x /var/mail/root -x /var/mail/staff


Example 4–15 Creating a Boot Environment and Excluding and Including Files

In this example, the new boot environment is named second_disk. The source boot environment contains one file system for the OS, root (/). The source also contains a file system that is named /mystuff. lucreate configures a UFS file system for the mount points root (/) and /mystuff. Only two directories in /mystuff are copied to the new boot environment: /mystuff/latest and /mystuff/backup. Swap is automatically shared between the source and the new boot environment.


# lucreate -n second_disk \ 
-m /:/dev/dsk/c01t0d0s0:ufs -m /mystuff:/dev/dsk/c1t1d0s0:ufs  \  
-x /mystuff -y /mystuff/latest -y /mystuff/backup

Chapter 5 Upgrading With Solaris Live Upgrade (Tasks)

This chapter explains how to use Solaris Live Upgrade to upgrade and activate an inactive boot environment.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. The usage is the same for the luupgrade and luactivate commands for a ZFS boot environment. For procedures for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 13, Creating a Boot Environment for ZFS Root Pools.


This chapter contains the following sections:

Task Map: Upgrading a Boot Environment

Table 5–1 Task Map: Upgrading With Solaris Live Upgrade

Task  

Description 

For Instructions 

Either upgrade a boot environment or install a Solaris Flash archive. 

  • Upgrade the inactive boot environment with an OS image.

  • Install a Solaris Flash archive on an inactive boot environment.

Activate an inactive boot environment. 

Makes changes effective and switches the inactive boot environment to active. 

Activating a Boot Environment

(optional) Switch back if a failure occurs when activating. 

Reactivates to the original boot environment if a failure occurs. 

Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks)

Upgrading a Boot Environment

Use the luupgrade command to upgrade a boot environment. This section provides procedures for upgrading an inactive boot environment from files that are located on DVD media, on CD media, or in a network installation image.

Guidelines for Upgrading

When you upgrade a boot environment with the latest OS, you do not affect the active boot environment. The new files merge with the inactive boot environment's critical file systems, but shareable file systems are not changed.

You can upgrade a boot environment that contains RAID-1 volumes or non-global zones, or you can install a Solaris Flash archive instead of upgrading.

Upgrading a System With Packages or Patches

You can use Solaris Live Upgrade to add patches and packages to a system. Solaris Live Upgrade creates a copy of the currently running system. This new boot environment can be upgraded or you can add packages or patches. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command.


Caution –

When adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or alter the active boot environment during an upgrade.

For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).


Table 5–2 Upgrading a Boot Environment With Packages and Patches

Type of Installation 

Description 

For More Information 

Adding patches to a boot environment.  

Create a new boot environment and use the luupgrade command with the -t option.

To Add Patches to a Network Installation Image on a Boot Environment

Adding packages to a boot environment. 

Use the luupgrade command with the -p option.

To Add Packages to a Network Installation Image on a Boot Environment

Procedure To Upgrade a Network Installation Image on a Boot Environment

To upgrade by using this procedure, you must use a DVD or a network installation image. If the installation requires more than one CD, you must use the procedure To Upgrade a Network Installation Image From Multiple CDs.

  1. Install the Solaris Live Upgrade SUNWlucfg, SUNWlur, and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Indicate the boot environment to upgrade and the path to the installation software by typing:


    # luupgrade -u -n BE_name -s os_image_path
    
    -u

    Upgrades a network installation image on a boot environment

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded

    -s os_image_path

    Specifies the path name of a directory that contains a network installation image


Example 5–1 Upgrading a Network Installation Image on a Boot Environment From DVD Media

In this example, the second_disk boot environment is upgraded by using DVD media. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlucfg SUNWlur SUNWluu
# luupgrade -u -n second_disk -s /cdrom/cdrom0 


Example 5–2 Upgrading a Network Installation Image on a Boot Environment From a Network Installation Image

In this example, the second_disk boot environment is upgraded. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlucfg SUNWlur SUNWluu
# luupgrade -u -n second_disk \ 
-s /net/installmachine/export/Solaris_10/OS_image 

Procedure To Upgrade a Network Installation Image From Multiple CDs

Because the network installation image resides on more than one CD, you must use this upgrade procedure. Use the luupgrade command with the -i option to install any additional CDs.

  1. Install the Solaris Live Upgrade SUNWlucfg, SUNWlur, and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Indicate the boot environment to upgrade and the path to the installation software by typing:


    # luupgrade -u -n BE_name -s os_image_path
    
    -u

    Upgrades a network installation image on a boot environment

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded

    -s os_image_path

    Specifies the path name of a directory that contains a network installation image

  4. When the installer is finished with the contents of the first CD, insert the second CD.

  5. This step is identical to the previous step, but the -u option is replaced by the -i option. Also, choose whether to run the installer on the second CD with menus or with text.

    • This command runs the installer on the second CD with menus.


      # luupgrade -i -n BE_name -s os_image_path
      
    • This command runs the installer on the second CD with text and requires no user interaction.


      # luupgrade -i -n BE_name -s os_image_path -O '-nodisplay -noconsole'
      
    -i

    Installs additional CDs. The software looks for an installation program on the specified medium and runs that program. The installer program is specified with -s.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains a network installation image.

    -O '-nodisplay -noconsole'

    (Optional) Runs the installer on the second CD in text mode and requires no user interaction.

  6. Repeat Step 4 and Step 5 for each CD that you want to install.

    The boot environment is ready to be activated. See Activating a Boot Environment.


Example 5–3 SPARC: Upgrading a Network Installation Image From Multiple CDs

In this example, the second_disk boot environment is upgraded and the installation image is on two CDs: the Solaris Software - 1 and the Solaris Software - 2 CDs. The -u option determines if sufficient space for all the packages is on the CD set. The -O option with the -nodisplay and -noconsole options prevents the character user interface from displaying after the reading of the second CD. If you use these options, you are not prompted to type information.

Note: If you do not use the -O option with the -nodisplay and -noconsole options, the character user interface (CUI) is displayed. Sun no longer recommends using the CUI to do Solaris Live Upgrade tasks.

Install the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlucfg SUNWlur SUNWluu

Insert the Solaris Software - 1 CD and type:


# luupgrade -u -n second_disk -s /cdrom/cdrom0/ 

Insert the Solaris Software - 2 CD and type the following.


# luupgrade -i -n second_disk -s /cdrom/cdrom0 -O '-nodisplay \ 
-noconsole'

Repeat the previous step for each CD that you want to install.


Procedure To Add Packages to a Network Installation Image on a Boot Environment

In the following procedure, packages are removed from and added to a new boot environment.


Caution –

When you are upgrading, adding, or removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or can alter the active boot environment.

For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To remove a package or set of packages from a new boot environment, type:


    # luupgrade -P -n BE_name package-name
    
    -P

    Indicates to remove the named package or packages from the boot environment

    -n BE_name

    Specifies the name of the boot environment where the package is to be removed

    package-name

    Specifies the names of the packages to be removed. Separate multiple package names with spaces.

  3. To add a package or a set of packages to the new boot environment, type:


    # luupgrade -p -n BE_name -s /path-to-packages package-name
    
    -p

    Indicates to add packages to the boot environment.

    -n BE_name

    Specifies the name of the boot environment where the package is to be added.

    -s path-to-packages

    Specifies the path to a directory that contains the package or packages that are to be added.

    package-name

    Specifies the names of the package or packages to be added. Separate multiple package names with a space.


Example 5–4 Adding Packages to a Network Installation Image on a Boot Environment

In this example, packages are removed from and then added to the second_disk boot environment.


# luupgrade -P -n second_disk SUNWabc SUNWdef SUNWghi
# luupgrade -p -n second_disk -s /net/installmachine/export/packages \
SUNWijk SUNWlmn SUNWpkr

Procedure: To Add Patches to a Network Installation Image on a Boot Environment

In the following procedure, patches are removed from and added to a new boot environment.


Caution –

When you are adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or can alter the active boot environment.



Caution –

You cannot use Solaris Live Upgrade to patch a Solaris 10 inactive boot environment when the active boot environment is running the Solaris 8 or 9 OS. Solaris Live Upgrade invokes the patch utilities on the active boot environment to patch the inactive boot environment. The Solaris 8 and Solaris 9 patch utilities are unaware of Solaris Zones, the Service Management Facility (SMF), and other enhancements in the Solaris 10 OS. Therefore, the patch utilities fail to correctly patch an inactive Solaris 10 boot environment. For this reason, if you are using Solaris Live Upgrade to upgrade a system from the Solaris 8 or Solaris 9 OS to the Solaris 10 OS, you must first activate the Solaris 10 boot environment before patching. After the Solaris 10 boot environment is activated, you can either patch the active boot environment directly or set up another inactive boot environment and patch that one by using Solaris Live Upgrade. For an example of upgrading and patching from the Solaris 8 to the Solaris 10 release, see Restrictions for Using Solaris Live Upgrade.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To remove a patch or set of patches from a new boot environment, type:


    # luupgrade -T -n second_disk patch-name
    
    -T

    Indicates to remove the named patch or patches from the boot environment.

    -n BE_name

    Specifies the name of the boot environment where the patch or patches are to be removed.

    patch-name

    Specifies the names of the patches to be removed. Separate multiple patch names with spaces.

  3. To add a patch or a set of patches to the new boot environment, type the following command.


    # luupgrade -t -n second_disk -s /path-to-patches patch-name
    
    -t

    Indicates to add patches to the boot environment.

    -n BE_name

    Specifies the name of the boot environment where the patch is to be added.

    -s path-to-patches

    Specifies the path to the directory that contains the patches that are to be added.

    patch-name

    Specifies the names of the patch or patches that are to be added. Separate multiple patch names with a space.


Example 5–5 Adding Patches to a Network Installation Image on a Boot Environment

In this example, patches are removed from and then added to the second_disk boot environment.


# luupgrade -T -n second_disk 222222-01
# luupgrade -t -n second_disk -s /net/installmachine/export/packages \
333333-01 444444-01

Procedure: To Obtain Information on Packages Installed on a Boot Environment

The following procedure checks the integrity of the packages installed on the new boot environment.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To check the integrity of the newly installed packages on the new boot environment, type:


    #  luupgrade -C -n BE_name -O "-v" package-name
    
    -C

    Indicates to run the pkgchk command on the named packages

    -n BE_name

    Specifies the name of the boot environment where the check is to be performed

    -O

    Passes the options directly to the pkgchk command

    package-name

    Specifies the names of the packages to be checked. Separate multiple package names with spaces. If package names are omitted, the check is done on all packages in the specified boot environment.

    -v

    Specifies to run the command in verbose mode


Example 5–6 Checking the Integrity of Packages on a Boot Environment

In this example, the packages SUNWabc, SUNWdef, and SUNWghi are checked to make sure they were installed properly and are not damaged.


# luupgrade -C -n second_disk SUNWabc SUNWdef SUNWghi
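
To check every package on the boot environment in verbose mode, you can omit the package names, as described in the previous procedure. The following is a minimal sketch that uses the boot environment name from this example:

# luupgrade -C -n second_disk -O "-v"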

Upgrading by Using a JumpStart Profile

You can create a JumpStart profile to use with Solaris Live Upgrade. If you are familiar with the custom JumpStart program, this is the same profile that custom JumpStart uses. The following procedures enable you to create a profile, test the profile, and install by using the luupgrade command with the -j option.


Caution –

When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:

  • archive_location keyword in a profile

  • luupgrade command with -s, -a, -j, and -J options


For more information see the following:

Procedure: To Create a Profile to be Used by Solaris Live Upgrade

This procedure shows you how to create a profile for use with Solaris Live Upgrade. You can use this profile to upgrade an inactive boot environment by using the luupgrade command with the -j option.

For procedures to use this profile, see the following sections:

  1. Use a text editor to create a text file.

    Name the file descriptively. Ensure that the name of the profile reflects how you intend to use the profile to install the Solaris software on a system. For example, you might name this profile upgrade_Solaris_10.

  2. Add profile keywords and values to the profile.

    Only the upgrade keywords in the following tables can be used in a Solaris Live Upgrade profile.

    The following table lists the keywords you can use with the Install_type keyword values of upgrade or flash_install.

    Keywords for an Initial Archive Creation 

    Description 

    Reference 

    (Required) Install_type

    Defines whether to upgrade the existing Solaris environment on a system or install a Solaris Flash archive on the system. Use the following values with this keyword: 

    • upgrade for an upgrade

    • flash_install for a Solaris Flash installation

    • flash_update for a Solaris Flash differential installation

    For a description of all the values for this keyword, see install_type Profile Keyword (UFS and ZFS) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Required for a Solaris Flash archive) archive_location

    Retrieves a Solaris Flash archive from a designated location.  

    For a list of values that can be used with this keyword, see archive_location Keyword in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) cluster (adding or deleting clusters)

    Designates whether a cluster is to be added or deleted from the software group that is to be installed on the system.  

    For a list of values that can be used with this keyword, see cluster Profile Keyword (Adding Software Groups) (UFS and ZFS) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) geo

    Designates the regional locale or locales that you want to install on a system or to add when upgrading a system.  

    For a list of values that can be used with this keyword, see geo Profile Keyword (UFS and ZFS) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) local_customization

    Before you install a Solaris Flash archive on a clone system, you can create custom scripts to preserve local configurations on the clone system. The local_customization keyword designates the directory where you have stored these scripts. The value is the path to the script on the clone system.

    For information about predeployment and postdeployment scripts, see Creating Customization Scripts in Solaris 10 5/09 Installation Guide: Solaris Flash Archives (Creation and Installation).

    (Optional) locale

    Designates the locale packages you want to install or add when upgrading.  

    For a list of values that can be used with this keyword, see locale Profile Keyword (UFS and ZFS) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) package

    Designates whether a package is to be added to or deleted from the software group that is to be installed on the system.  

    For a list of values that can be used with this keyword, see package Profile Keyword (UFS and ZFS) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    The following table lists the keywords you can use with the Install_type keyword value flash_update.

    Keywords for a Differential Archive Creation 

    Description 

    Reference 

    (Required) Install_type

    Defines the installation to install a Solaris Flash archive on the system. The value for a differential archive is flash_update.

    For a description of all the values for this keyword, see install_type Profile Keyword (UFS and ZFS) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Required) archive_location

    Retrieves a Solaris Flash archive from a designated location.  

    For a list of values that can be used with this keyword, see archive_location Keyword in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) forced_deployment

    Forces the installation of a Solaris Flash differential archive onto a clone system that is different than the software expects. If you use forced_deployment, all new files are deleted to bring the clone system to the expected state. If you are not certain that you want files to be deleted, use the default, which protects new files by stopping the installation.

    For more information about this keyword, see forced_deployment Profile Keyword (Installing Solaris Flash Differential Archives) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) local_customization

    Before you install a Solaris Flash archive on a clone system, you can create custom scripts to preserve local configurations on the clone system. The local_customization keyword designates the directory where you have stored these scripts. The value is the path to the script on the clone system.

    For information about predeployment and postdeployment scripts, see Creating Customization Scripts in Solaris 10 5/09 Installation Guide: Solaris Flash Archives (Creation and Installation).

    (Optional) no_content_check

    When installing a clone system with a Solaris Flash differential archive, you can use the no_content_check keyword to ignore file-by-file validation. File-by-file validation ensures that the clone system is a duplicate of the master system. Avoid using this keyword unless you are sure the clone system is a duplicate of the original master system.

    For more information about this keyword, see no_content_check Profile Keyword (Installing Solaris Flash Archives) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    (Optional) no_master_check

    When installing a clone system with a Solaris Flash differential archive, you can use the no_master_check keyword to ignore a check of files. Clone system files are not checked. A check would ensure the clone was built from the original master system. Avoid using this keyword unless you are sure the clone system is a duplicate of the original master system.

    For more information about this keyword, see no_master_check Profile Keyword (Installing Solaris Flash Archives) in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

  3. Save the profile in a directory on the local system.

  4. Ensure that root owns the profile and that the permissions are set to 644.

  5. Test the profile (optional).

    For a procedure to test the profile, see To Test a Profile to Be Used by Solaris Live Upgrade.


Example 5–7 Creating a Solaris Live Upgrade Profile

In this example, a profile provides the upgrade parameters. This profile is to be used to upgrade an inactive boot environment with the Solaris Live Upgrade luupgrade command and the -u and -j options. This profile adds a package and a cluster. A regional locale and additional locales are also added to the profile. If you add locales to the profile, make sure that you have created a boot environment with additional disk space.

# profile keywords         profile values
# ----------------         -------------------
  install_type             upgrade
  package                  SUNWxwman add
  cluster                  SUNWCacc add
  geo                      C_Europe
  locale                   zh_TW
  locale                   zh_TW.BIG5
  locale                   zh_TW.UTF-8
  locale                   zh_HK.UTF-8
  locale                   zh_HK.BIG5HK
  locale                   zh
  locale                   zh_CN.GB18030
  locale                   zh_CN.GBK
  locale                   zh_CN.UTF-8
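
As a minimal sketch of how this profile might be used (the profile location, boot environment name, and image path are hypothetical), you would set the ownership and permissions described in the previous procedure and then run the luupgrade command with the -u and -j options:

# chown root:root /var/tmp/upgrade_Solaris_10
# chmod 644 /var/tmp/upgrade_Solaris_10
# luupgrade -u -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-j /var/tmp/upgrade_Solaris_10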


Example 5–8 Creating a Solaris Live Upgrade Profile to Install a Differential Archive

The following example of a profile is to be used by Solaris Live Upgrade to install a differential archive on a clone system. Only files that are specified by the differential archive are added, deleted, or changed. The Solaris Flash archive is retrieved from an NFS server. Because the image was built by the original master system, the clone system is not checked for a valid system image. This profile is to be used with the Solaris Live Upgrade luupgrade command and the -f and -j options.

# profile keywords         profile values
# ----------------         -------------------
 install_type              flash_update
 archive_location          nfs installserver:/export/solaris/archive/solarisarchive
 no_master_check

To use the luupgrade command to install the differential archive, see To Install a Solaris Flash Archive With a Profile.
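
As a minimal sketch (the boot environment name, image path, and profile location are hypothetical), the installation command for this profile might look like the following, using the -f and -j options from that procedure:

# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-j /var/tmp/flash_profile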


Procedure: To Test a Profile to Be Used by Solaris Live Upgrade

After you create a profile, use the luupgrade command to test the profile. By looking at the installation output that is generated by luupgrade, you can quickly determine if a profile works as you intended.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Test the profile.


    # luupgrade -u -n BE_name -D -s os_image_path -j profile_path
    
    -u

    Upgrades an operating system image on a boot environment.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -D

    The luupgrade command uses the selected boot environment's disk configuration to test the profile options that are passed with the -j option.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM, CD-ROM, or it can be an NFS or UFS directory.

    -j profile_path

    Path to a profile that is configured for an upgrade. The profile must be in a directory on the local machine.


Example 5–9 Testing a Profile by Using Solaris Live Upgrade

In the following example, the profile is named flash_profile. The profile is successfully tested on the inactive boot environment that is named second_disk.


# luupgrade -u -n second_disk -D -s /net/installsvr/export/u1/combined.u1wos \
 -j /var/tmp/flash_profile
Validating the contents of the media /net/installsvr/export/u1/combined.u1wos.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version 10.
Locating upgrade profile template to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE second_disk.
Determining packages to install or upgrade for BE second_disk.
Simulating the operating system upgrade of the BE second_disk.
The operating system upgrade simulation is complete.
INFORMATION: var/sadm/system/data/upgrade_cleanup contains a log of the
upgrade operation.
INFORMATION: var/sadm/system/data/upgrade_cleanup contains a log of
cleanup operations required.
The Solaris upgrade of the boot environment second_disk is complete.

You can now use the profile to upgrade an inactive boot environment.


Procedure: To Upgrade With a Profile by Using Solaris Live Upgrade

This procedure provides step-by-step instructions for upgrading an OS by using a profile.

If you want to install a Solaris Flash archive by using a profile, see To Install a Solaris Flash Archive With a Profile.

If you added locales to the profile, make sure that you have created a boot environment with additional disk space.


Caution –

When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:

  • archive_location keyword in a profile

  • luupgrade command with -s, -a, -j, and -J options


  1. Install the Solaris Live Upgrade SUNWlucfg, SUNWlur, and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Create a profile.

    See To Create a Profile to be Used by Solaris Live Upgrade for a list of upgrade keywords that can be used in a Solaris Live Upgrade profile.

  4. Type:


    # luupgrade -u -n BE_name -s os_image_path -j profile_path
    
    -u

    Upgrades an operating system image on a boot environment.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM, CD-ROM, or it can be an NFS or UFS directory.

    -j profile_path

    Path to a profile. The profile must be in a directory on the local machine. For information about creating a profile, see To Create a Profile to be Used by Solaris Live Upgrade.


Example 5–10 Upgrading a Boot Environment by Using a Custom JumpStart Profile

In this example, the second_disk boot environment is upgraded by using a profile. The -j option is used to access the profile. The boot environment is then ready to be activated. To create a profile, see To Create a Profile to be Used by Solaris Live Upgrade. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlucfg SUNWlur SUNWluu
# luupgrade -u -n second_disk \ 
-s /net/installmachine/export/solarisX/OS_image \ 
-j /var/tmp/profile 

The boot environment is ready to be activated. See Activating a Boot Environment.


Installing Solaris Flash Archives on a Boot Environment

This section provides the procedure for using Solaris Live Upgrade to install Solaris Flash archives. Installing a Solaris Flash archive overwrites all files on the new boot environment except for shared files. Archives are stored on the following media:

Note the following issues with installing and creating a Solaris Flash archive.

Description 

Example 


Caution –

When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:

  • archive_location keyword in a profile

  • luupgrade command with -s, -a, -j, and -J options


For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive.  


Caution –

A Solaris Flash archive cannot be properly created when a non-global zone is installed. The Solaris Flash feature is not compatible with the Solaris Zones feature. If you create a Solaris Flash archive in a non-global zone or create an archive in a global zone that has non-global zones installed, the resulting archive does not install properly when the archive is deployed.


 

Description 

For More Information 

For examples of the correct syntax for paths that are associated with archive storage. 

See archive_location Keyword in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

To use the Solaris Flash installation feature, you install a master system and create the Solaris Flash archive.  

For more information about creating an archive, see Chapter 3, Creating Solaris Flash Archives (Tasks), in Solaris 10 5/09 Installation Guide: Solaris Flash Archives (Creation and Installation).
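
On the master system, the Solaris Flash archive is created with the flarcreate command. The following is a minimal sketch, assuming a hypothetical archive name and output path (see the referenced chapter for the full set of options):

# flarcreate -n s10-image /export/flash/s10-image.flar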

Procedure: To Install a Solaris Flash Archive on a Boot Environment

  1. Install the Solaris Live Upgrade SUNWlucfg, SUNWlur, and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Type:


    # luupgrade -f -n BE_name -s os_image_path -a archive
    
    -f

    Indicates to install an operating system from a Solaris Flash archive.

    -n BE_name

    Specifies the name of the boot environment that is to be installed with an archive.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM, CD-ROM, or it can be an NFS or UFS directory. This OS image provides a miniroot that boots a minimal, bootable root (/) file system to facilitate the installation of the Solaris Flash archive. The miniroot is not the image that is installed. The -a option provides the operating system image.

    -a archive

    Path to the Solaris Flash archive when the archive is available on the local file system. The operating system image versions that are specified with the -s option and the -a option must be identical.


Example 5–11 Installing Solaris Flash Archives on a Boot Environment

In this example, an archive is installed on the second_disk boot environment. The archive is located on the local system. The -s option provides a miniroot that boots a minimal, bootable root (/) file system to facilitate the installation of the Solaris Flash archive. The miniroot is not the image that is installed. The -a option provides the operating system image. The operating system versions for the -s and -a options are both Solaris 10 5/09 releases. All files are overwritten on second_disk except shareable files. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlucfg SUNWlur SUNWluu
# luupgrade -f -n second_disk \ 
-s /net/installmachine/export/Solaris_10/OS_image \ 
-a /net/server/archive/10 

The boot environment is ready to be activated. See Activating a Boot Environment.


Procedure: To Install a Solaris Flash Archive With a Profile

This procedure provides the steps to install a Solaris Flash archive or differential archive by using a profile.

If you added locales to the profile, make sure that you have created a boot environment with additional disk space.

  1. Install the Solaris Live Upgrade SUNWlucfg, SUNWlur, and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Create a profile.

    See To Create a Profile to be Used by Solaris Live Upgrade for a list of keywords that can be used in a Solaris Live Upgrade profile.

  4. Type:


    # luupgrade -f -n BE_name -s os_image_path -j profile_path
    
    -f

    Indicates to install an operating system from a Solaris Flash archive.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM, CD-ROM, or it can be an NFS or UFS directory. This OS image provides a miniroot that boots a minimal, bootable root (/) file system to facilitate the installation of the Solaris Flash archive. The miniroot is not the image that is installed. The -j option provides the path to the profile that contains the Solaris Flash archive operating system image.

    -j profile_path

    Path to a JumpStart profile that is configured for a flash installation. The profile must be in a directory on the local machine. The -s option's operating system version and the Solaris Flash archive operating system version must be identical.

    The boot environment is ready to be activated. See Activating a Boot Environment.


Example 5–12 Installing a Solaris Flash Archive on a Boot Environment With a Profile

In this example, a profile provides the location of the archive to be installed.

# profile keywords         profile values
# ----------------         -------------------
 install_type              flash_install
 archive_location          nfs installserver:/export/solaris/flasharchive/solarisarchive
 

After creating the profile, you can run the luupgrade command and install the archive. The -s option provides a miniroot that boots a minimal, bootable root (/) file system to facilitate the installation of the Solaris Flash archive. The miniroot is not the image that is installed. The -j option provides the path to the profile that contains the path to the Solaris Flash archive operating system image. The -j option is used to access the profile. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlucfg SUNWlur SUNWluu
# luupgrade -f -n second_disk \ 
-s /net/installmachine/export/solarisX/OS_image \ 
-j /var/tmp/profile 

The boot environment is then ready to be activated. See Activating a Boot Environment.

To create a profile, see To Create a Profile to be Used by Solaris Live Upgrade.


Procedure: To Install a Solaris Flash Archive With a Profile Keyword

This procedure enables you to install a Solaris Flash archive and use the archive_location keyword at the command line rather than from a profile file. You can quickly retrieve an archive without the use of a profile file.

  1. Install the Solaris Live Upgrade SUNWlucfg, SUNWlur, and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.

  2. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  3. Type:


    # luupgrade -f -n BE_name -s os_image_path -J 'archive_location path-to-archive'
    
    -f

    Specifies to upgrade an operating system from a Solaris Flash archive.

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded.

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM, CD-ROM, or it can be an NFS or UFS directory. This OS image provides a miniroot that boots a minimal, bootable root (/) file system to facilitate the installation of the Solaris Flash archive. The miniroot is not the image that is installed. The -J option provides the location of the Solaris Flash archive to be installed.

    -J 'archive_location path-to-archive'

    Specifies the archive_location profile keyword and the location of the Solaris Flash archive. The -s option's operating system version and the Solaris Flash archive operating system version must be identical. For the keyword values, see archive_location Keyword in Solaris 10 5/09 Installation Guide: Custom JumpStart and Advanced Installations.

    The boot environment is ready to be activated. See Activating a Boot Environment.


Example 5–13 Installing a Solaris Flash Archive By Using a Profile Keyword

In this example, an archive is installed on the second_disk boot environment. The -s option provides a miniroot that boots a minimal, bootable root (/) file system to facilitate the installation of the Solaris Flash archive. The miniroot is not the image that is installed. The -J option with the archive_location keyword is used to retrieve the archive. All files are overwritten on second_disk except shareable files. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.


# pkgadd -d /server/packages SUNWlucfg SUNWlur SUNWluu
# luupgrade -f -n second_disk \ 
-s /net/installmachine/export/solarisX/OS_image \ 
-J 'archive_location http://example.com/myflash.flar' 

Activating a Boot Environment

Activating a boot environment makes it bootable on the next reboot of the system. You can also switch back quickly to the original boot environment if a failure occurs on booting the newly active boot environment. See Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Description 

For More Information 

Use this procedure to activate a boot environment with the luactivate command.


Note –

The first time you activate a boot environment, the luactivate command must be used.


To Activate a Boot Environment

Use this procedure to activate a boot environment and force a synchronization of files.  


Note –

Files are synchronized with the first activation. If you switch boot environments after the first activation, files are not synchronized.


To Activate a Boot Environment and Synchronize Files

x86: Use this procedure to activate a boot environment with the GRUB menu.


Note –

A GRUB menu can facilitate switching from one boot environment to another. A boot environment appears in the GRUB menu after the first activation.


x86: To Activate a Boot Environment With the GRUB Menu

Requirements and Limitations for Activating a Boot Environment

To successfully activate a boot environment, that boot environment must meet the following conditions:

Description 

For More Information 

The boot environment must have a status of “complete.”  

To check status, see Displaying the Status of All Boot Environments

If the boot environment is not the current boot environment, you cannot have mounted the partitions of that boot environment by using the lumount or mount commands.

To view man pages, see lumount(1M) or mount(1M)

The boot environment that you want to activate cannot be involved in a comparison operation.  

For procedures, see Comparing Boot Environments

If you want to reconfigure swap, make this change prior to booting the inactive boot environment. By default, all boot environments share the same swap devices.  

To reconfigure swap, see To Create a Boot Environment and Reconfiguring Swap
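
For example, to confirm that the boot environment you intend to activate has a status of complete, you can run the lustatus command with that boot environment's name (second_disk is the name used in this chapter's examples):

# lustatus second_disk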


x86 only –

If you have an x86 based system, you can also activate with the GRUB menu. Note the following exceptions:

See x86: Activating a Boot Environment With the GRUB Menu.


Procedure: To Activate a Boot Environment

The following procedure switches a new boot environment to become the currently running boot environment.


x86 only –

If you have an x86 based system, you can also activate with the GRUB menu. Note the following exceptions:

See x86: Activating a Boot Environment With the GRUB Menu.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To activate the boot environment, type:


    # /sbin/luactivate  BE_name
    
    BE_name

    Specifies the name of the boot environment that is to be activated

  3. Reboot.


    # init 6
    

    Caution –

    Use only the init or shutdown commands to reboot. If you use the reboot, halt, or uadmin commands, the system does not switch boot environments. The last-active boot environment is booted again.



Example 5–14 Activating a Boot Environment

In this example, the second_disk boot environment is activated at the next reboot.


# /sbin/luactivate second_disk
# init 6

Procedure: To Activate a Boot Environment and Synchronize Files

The first time you boot from a newly created boot environment, Solaris Live Upgrade software synchronizes the new boot environment with the boot environment that was last active. “Synchronize” means that certain critical system files and directories are copied from the last-active boot environment to the boot environment being booted. Solaris Live Upgrade does not perform this synchronization after the initial boot, unless you force synchronization with the luactivate command and the -s option.


x86 only –

When you switch between boot environments with the GRUB menu, files also are not synchronized. You must use the following procedure to synchronize files.


For more information about synchronization, see Synchronizing Files Between Boot Environments.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To activate the boot environment, type:


    # /sbin/luactivate  -s BE_name
    
    -s

    Forces a synchronization of files between the last-active boot environment and the new boot environment. The first time that a boot environment is activated, the files between the boot environments are synchronized. With subsequent activations, the files are not synchronized unless you use the -s option.


    Caution –

    Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 5/09 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 5/09 files might not be compatible with the Solaris 9 files.


    BE_name

    Specifies the name of the boot environment that is to be activated.

  3. Reboot.


    # init 6
    

Example 5–15 Activating a Boot Environment and Synchronizing Files

In this example, the second_disk boot environment is activated at the next reboot and the files are synchronized.


# /sbin/luactivate -s second_disk
# init 6

x86: Activating a Boot Environment With the GRUB Menu

A GRUB menu provides an optional method of switching between boot environments. The GRUB menu is an alternative to activating (booting) with the luactivate command. The table below notes cautions and limitations when using the GRUB menu.

Table 5–3 x86: Activating With the GRUB Menu Summary

Task 

Description 

For More Information 

Caution

After you have activated a boot environment, do not change the disk order in the BIOS. Changing the order might cause the GRUB menu to become invalid. If this problem occurs, changing the disk order back to the original state fixes the GRUB menu. 

 

Activating a boot environment for the first time 

The first time you activate a boot environment, you must use the luactivate command. The next time you boot, that boot environment's name is displayed in the GRUB main menu. You can thereafter switch to this boot environment by selecting the appropriate entry in the GRUB menu.

To Activate a Boot Environment

Synchronizing files 

The first time you activate a boot environment, files are synchronized between the current boot environment and the new boot environment. With subsequent activations, files are not synchronized. When you switch between boot environments with the GRUB menu, files also are not synchronized. You can force a synchronization when using the luactivate command with the -s option.

To Activate a Boot Environment and Synchronize Files

Boot environments created before the Solaris 10 1/06 release

If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command. These older boot environments do not display on the GRUB menu.

To Activate a Boot Environment

Editing or customizing the GRUB menu entries 

The menu.lst file contains the information that is displayed in the GRUB menu. You can revise this file for the following reasons:

  • To add to the GRUB menu entries for operating systems other than the Solaris OS.

  • To customize booting behavior. For example, you could change booting to verbose mode or change the default time that automatically boots the OS.


Note –

If you want to change the GRUB menu, you need to locate the menu.lst file. For step-by-step instructions, see Chapter 14, Managing the Solaris Boot Archives (Tasks), in System Administration Guide: Basic Administration.



Caution –

Do not use the GRUB menu.lst file to modify Solaris Live Upgrade entries. Modifications could cause Solaris Live Upgrade to fail. Although you can use the menu.lst file to customize booting behavior, the preferred method for customization is to use the eeprom command. If you use the menu.lst file to customize, the Solaris OS entries might be modified during a software upgrade. Changes to the file could be lost.
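
A minimal sketch of the eeprom approach follows (the property and value shown are illustrative assumptions; see the eeprom(1M) man page for the boot properties that are available on your release). You can list the current boot properties and, for example, redirect the console to the first serial port without editing the menu.lst file:

# eeprom
# eeprom console=ttya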


Procedure: x86: To Activate a Boot Environment With the GRUB Menu

You can switch between two boot environments with the GRUB menu. Note the following limitations:

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Reboot the system.


    # init 6
    

    The GRUB main menu is displayed. The two operating systems are listed, Solaris and second_disk, which is a Solaris Live Upgrade boot environment. The failsafe entries are for recovery, if for some reason the primary OS does not boot.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris  failsafe                                                  |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. To activate a boot environment, use the arrow key to select the desired boot environment and press Return.

    The selected boot environment is booted and becomes the active boot environment.

Chapter 6 Failure Recovery: Falling Back to the Original Boot Environment (Tasks)

This chapter explains how to recover from an activation failure.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. The usage for the luactivate command for a ZFS boot environment is the same. For procedures for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 13, Creating a Boot Environment for ZFS Root Pools.


If a failure is detected after upgrading or if the application is not compatible with an upgraded component, fall back to the original boot environment by using one of the following procedures, depending on your platform.

SPARC: Falling Back to the Original Boot Environment

You can fall back to the original boot environment by using three methods:

Procedure: SPARC: To Fall Back Despite Successful New Boot Environment Activation

Use this procedure when you have successfully activated your new boot environment, but are unhappy with the results.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /sbin/luactivate BE_name
    
    BE_name

    Specifies the name of the boot environment to be activated

  3. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

Procedure: SPARC: To Fall Back From a Failed Boot Environment Activation

  1. At the OK prompt, boot the machine to single-user state from the Solaris Operating System DVD, Solaris Software - 1 CD, the network, or a local disk.


    OK boot device_name -s
    
    device_name

    Specifies the name of the device from which the system can boot, for example, /dev/dsk/c0t0d0s0

  2. Type:


    # /sbin/luactivate BE_name
    
    BE_name

    Specifies the name of the boot environment to be activated

  3. At the prompt, type:


    Do you want to fallback to activate boot environment <disk name> 
    (yes or no)? yes
    

    A message displays that the fallback activation is successful.

  4. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

Procedure: SPARC: To Fall Back to the Original Boot Environment by Using a DVD, CD, or Net Installation Image

Use this procedure to boot from a DVD, a CD, a net installation image, or another disk that can be booted. You need to mount the root (/) slice from the last-active boot environment. Then run the luactivate command, which makes the switch. When you reboot, the last-active boot environment is up and running again.

  1. At the OK prompt, boot the machine to single-user state from the Solaris Operating System DVD, Solaris Software - 1 CD, the network, or a local disk:


    OK boot cdrom -s 
    

    or


    OK boot net -s
    

    or


    OK boot device_name -s
    
    device_name

    Specifies the name of the disk and the slice where a copy of the operating system resides, for example /dev/dsk/c0t0d0s0

  2. If necessary, check the integrity of the root (/) file system for the fallback boot environment.


    # fsck device_name
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  3. Mount the active boot environment root (/) slice to some directory, such as /mnt:


    # mount device_name /mnt
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  4. From the active boot environment root (/) slice, type:


    # /mnt/sbin/luactivate
    

    luactivate activates the previous working boot environment and indicates the result.

  5. Unmount /mnt.


    # umount  /mnt
    
  6. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

x86: Falling Back to the Original Boot Environment

To fall back to the original boot environment, choose the procedure that best fits your circumstances.

Procedure: x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu

Use this procedure when you have successfully activated your new boot environment, but are dissatisfied with the results. You can quickly switch back to the original boot environment by using the GRUB menu.


Note –

The boot environments that are being switched must be GRUB boot environments that were created with GRUB software. If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment is not a GRUB boot environment.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Reboot the system.


    # init 6
    

    The GRUB menu is displayed. The Solaris OS is the original boot environment. The second_disk boot environment was successfully activated and appears on the GRUB menu. The failsafe entries are for recovery if for some reason the primary entry does not boot.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. To boot to the original boot environment, use the arrow key to select the original boot environment and press Return.


Example 6–1 To Fall Back Despite Successful New Boot Environment Activation


# su
# init 6

GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris  failsafe                                                  |
|second_disk                                                        |
|second_disk failsafe                                               |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.

Select the original boot environment, Solaris.


Procedure: x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu

If you experience a failure while booting, use the following procedure to fall back to the original boot environment. In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.


Caution –

For the Solaris 10 3/05 release, the recommended action to fall back if the previous boot environment and new boot environment were on different disks included changing the hard disk boot order in the BIOS. Starting with the Solaris 10 1/06 release, changing the BIOS disk order is unnecessary and is strongly discouraged. Changing the BIOS disk order might invalidate the GRUB menu and cause the boot environment to become unbootable. If the BIOS disk order is changed, reverting the order back to the original settings restores system functionality.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. To display the GRUB menu, reboot the system.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to this procedure, x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu and the DVD or CD.

  4. Boot to single user mode by editing the GRUB menu.

    1. To edit the GRUB main menu, type e.

      The GRUB edit menu is displayed.


      root (hd0,2,a)
      kernel /platform/i86pc/multiboot
      module /platform/i86pc/boot_archive
    2. Select the original boot environment's kernel entry by using the arrow keys.

    3. To edit the boot entry, type e.

      The kernel entry is displayed in the GRUB edit menu.


      grub edit>kernel /platform/i86pc/multiboot
    4. Type -s and press Enter.

      The following example notes the placement of the -s option.


      grub edit>kernel /platform/i86pc/multiboot -s
      
    5. To begin the booting process in single user mode, type b.

  5. If necessary, check the integrity of the root (/) file system for the fallback boot environment.


    # fsck mount_point
    
    mount_point

    A root (/) file system that is known and reliable

  6. Mount the original boot environment root slice to some directory (such as /mnt):


    # mount device_name /mnt
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  7. From the active boot environment root slice, type:


    # /mnt/sbin/luactivate
    

    luactivate activates the previous working boot environment and indicates the result.

  8. Unmount /mnt.


    # umount /mnt
    
  9. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

Procedure: x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu and the DVD or CD

If you experience a failure while booting, use the following procedure to fall back to the original boot environment. In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.


Caution –

For the Solaris 10 3/05 release, the recommended action to fall back if the previous boot environment and new boot environment were on different disks included changing the hard disk boot order in the BIOS. Starting with the Solaris 10 1/06 release, changing the BIOS disk order is unnecessary and is strongly discouraged. Changing the BIOS disk order might invalidate the GRUB menu and cause the boot environment to become unbootable. If the BIOS disk order is changed, reverting the order back to the original settings restores system functionality.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.

  3. Boot from the DVD or CD.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris 10 5/09                                               |
    |Solaris 10 5/09 Serial Console ttya                           |
    |Solaris 10 5/09 Serial Console ttyb (for lx50, v60x and v65x  |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  4. Wait for the default option to boot or choose any option displayed.

    The installation screen is displayed.


    +-------------------------------------------------------------------+
    
    |Select the type of installation you want to perform:                |
    |                                                                    |
    |         1 Solaris Interactive                                      |
    |         2 Custom JumpStart                                         |
    |         3 Solaris Interactive Text (Desktop session)               |
    |         4 Solaris Interactive Text (Console session)               |
    |         5 Apply driver updates                                     |
    |         6 Single user shell                                        |
    |                                                                    |
    |        Enter the number of your choice followed by the <ENTER> key.|
    |        Alternatively, enter custom boot arguments directly.        |
    |                                                                    |
    |         If you wait 30 seconds without typing anything,            |
    |         an interactive installation will be started.               |
    +-------------------------------------------------------------------+
  5. Choose the “Single user shell” option.

    The following message is displayed.


    Do you wish to automatically update the boot archive? y /n
  6. Type: n


    Starting shell...
    #

    You are now in single user mode.

  7. If necessary, check the integrity of the root (/) file system for the fallback boot environment.


    # fsck mount_point
    
    mount_point

    A root (/) file system that is known and reliable

  8. Mount the original boot environment root slice to some directory (such as /mnt):


    # mount device_name /mnt
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  9. From the active boot environment root slice, type:


    # /mnt/sbin/luactivate
    Do you want to fallback to activate boot environment c0t4d0s0
    (yes or no)? yes
    

    luactivate activates the previous working boot environment and indicates the result.

  10. Unmount /mnt.


    # umount device_name
    
    device_name

    Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.

  11. Reboot.


    # init 6
    

    The previous working boot environment becomes the active boot environment.

Chapter 7 Maintaining Solaris Live Upgrade Boot Environments (Tasks)

This chapter explains various maintenance tasks such as keeping a boot environment file system up to date or deleting a boot environment. This chapter contains the following sections:


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. The maintenance procedures for a ZFS boot environment are the same. For procedures for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 13, Creating a Boot Environment for ZFS Root Pools.


Overview of Solaris Live Upgrade Maintenance

Table 7–1 Overview of Solaris Live Upgrade Maintenance

Task  

Description 

For Instructions 

(Optional) View Status. 

  • View whether a boot environment is active, being activated, scheduled to be activated, or in the midst of a comparison.

 
  • Compare the active and inactive boot environments.

 
  • Display the name of the active boot environment.

 
  • View the configurations of a boot environment.

(Optional) Update an inactive boot environment. 

Copy file systems from the active boot environment again without changing the configuration of file systems. 

Updating a Previously Configured Boot Environment

(Optional) Other tasks. 

  • Delete a boot environment.

 
  • Change the name of a boot environment.

 
  • Add or change a description that is associated with a boot environment name.

 
  • Cancel scheduled jobs.

Displaying the Status of All Boot Environments

Use the lustatus command to display information about a boot environment. If no boot environment is specified, the status information for all boot environments on the system is displayed.

The following details are displayed for each boot environment: the boot environment name, whether the creation is complete, whether the boot environment is active now or on the next reboot, whether it can be deleted, and the status of any copy operation.

ProcedureTo Display the Status of All Boot Environments

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lustatus BE_name
    
    BE_name

    Specifies the name of the inactive boot environment to view status. If BE_name is omitted, lustatus displays status for all boot environments in the system.
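
    For example, to display the status of only a single, hypothetical boot environment named second_disk, you would type:


    # lustatus second_disk
    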

    In the following example, the status for all boot environments is displayed.


    # lustatus
    boot environment     Is        Active   Active     Can      Copy
    Name                 Complete  Now      OnReboot   Delete   Status
    ------------------------------------------------------------------------
    disk_a_S9            yes       yes      yes        no       -
    disk_b_S10database   yes       no       no         yes      COPYING
    disk_b_S9a           no        no       no         yes      -

    Note –

    You cannot perform copy, rename, or upgrade operations on disk_b_S9a because it is not complete, or on disk_b_S10database because a Solaris Live Upgrade operation is in progress.


Updating a Previously Configured Boot Environment

You can update the contents of a previously configured boot environment with the Copy menu or the lumake command. File systems from the active (source) boot environment are copied to the target boot environment, and the data on the target boot environment is destroyed. A boot environment must have the status “complete” before you can copy from it. See Displaying the Status of All Boot Environments to determine a boot environment's status.

The copy job can be scheduled for a later time, and only one job can be scheduled at a time. To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.

ProcedureTo Update a Previously Configured Boot Environment

This procedure copies source files over outdated files on a boot environment that was previously created.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lumake -n  BE_name [-s source_BE] [-t  time] [-m email_address]
    
    -n BE_name

    Specifies the name of the boot environment that has file systems that are to be replaced.

    -s source_BE

    (Optional) Specifies the name of the source boot environment that contains the file systems to be copied to the target boot environment. If you omit this option, lumake uses the current boot environment as the source.

    -t time

    (Optional) Set up a batch job to copy over file systems on a specified boot environment at a specified time. The time is given in the format that is specified by the man page, at(1).

    -m email_address

    (Optional) Enables you to send an email of the lumake output to a specified address on command completion. email_address is not checked. You can use this option only in conjunction with -t.


Example 7–1 Updating a Previously Configured Boot Environment

In this example, file systems from first_disk are copied to second_disk. When the job is completed, an email is sent to Joe at anywhere.com.


# lumake -n  second_disk -s first_disk -m joe@anywhere.com

The files on first_disk are copied to second_disk and email is sent for notification. To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.
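
To schedule the copy for a later time instead of running it immediately, combine the -t and -m options. The following is a sketch only; the 04:00 start time is hypothetical and must be given in the at(1) time format:


# lumake -n second_disk -s first_disk -t 04:00 -m joe@anywhere.com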


Canceling a Scheduled Create, Upgrade, or Copy Job

A boot environment's scheduled creation, upgrade, or copy job can be canceled just prior to the time the job starts. The job can be scheduled by the lumake command. At any time, only one job can be scheduled on a system.

ProcedureTo Cancel a Scheduled Create, Upgrade, or Copy Job

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lucancel
    

    The job no longer executes at the time that is specified.

Comparing Boot Environments

Use the lucompare command to check for differences between the active boot environment and other boot environments. To make a comparison, the inactive boot environment must be in a complete state and cannot have a copy job that is pending. See Displaying the Status of All Boot Environments.

The lucompare command generates a comparison of boot environments that includes the contents of any non-global zones.

The specified boot environment cannot have any partitions that are mounted with lumount or mount.

ProcedureTo Compare Boot Environments

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/lucompare -i  infile (or) -t -o  outfile BE_name
    
    -i  infile

    Compare files that are listed in infile. The files to be compared should have absolute file names. If the entry in the file is a directory, then comparison is recursive to the directory. Use either this option or -t, not both.

    -t

    Compare only nonbinary files. This comparison uses the file(1) command on each file to determine if the file is a text file. Use either this option or -i, not both.

    -o  outfile

    Redirect the output of differences to outfile.

    BE_name

    Specifies the name of the boot environment that is compared to the active boot environment.


Example 7–2 Comparing Boot Environments

In this example, the first_disk boot environment (source) is compared to the second_disk boot environment, and the results are sent to a file.


# /usr/sbin/lucompare -i  /etc/lu/compare/ \
-o /var/tmp/compare.out second_disk
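
To compare only text files instead of the files listed in an input file, you could use the -t option in place of -i. A sketch, reusing the same hypothetical output file and boot environment:


# /usr/sbin/lucompare -t -o /var/tmp/compare.out second_disk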

Deleting an Inactive Boot Environment

Use the ludelete command to remove an inactive boot environment. Note that you cannot delete the active boot environment or the boot environment that is activated on the next reboot.

ProcedureTo Delete an Inactive Boot Environment

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # ludelete BE_name
    
    BE_name

    Specifies the name of the inactive boot environment that is to be deleted


Example 7–3 Deleting an Inactive Boot Environment

In this example, the boot environment, second_disk, is deleted.


# ludelete second_disk

Displaying the Name of the Active Boot Environment

Use the lucurr command to display the name of the currently running boot environment. If no boot environments are configured on the system, the message “No Boot Environments are defined” is displayed. Note that lucurr reports only the name of the current boot environment, not the boot environment that is active on the next reboot. See Displaying the Status of All Boot Environments to determine a boot environment's status.

ProcedureTo Display the Name of the Active Boot Environment

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/lucurr
    

Example 7–4 Displaying the Name of the Active Boot Environment

In this example, the name of the current boot environment is displayed.


# /usr/sbin/lucurr
solaris10

Changing the Name of a Boot Environment

Renaming a boot environment is often useful when you upgrade the boot environment from one Solaris release to another release. For example, following an operating system upgrade, you might rename the boot environment solaris8 to solaris10.

Use the lurename command to change the inactive boot environment's name.


x86 only –

Starting with the Solaris 10 1/06 release, the GRUB menu is automatically updated when you use the Rename menu or lurename command. The updated GRUB menu displays the boot environment's name in the list of boot entries. For more information about the GRUB menu, see Booting Multiple Boot Environments.

To determine the location of the GRUB menu's menu.lst file, see Chapter 14, Managing the Solaris Boot Archives (Tasks), in System Administration Guide: Basic Administration.


Table 7–2 Limitations for Naming a Boot Environment

  • The name must not exceed 30 characters in length.

  • The name can consist only of alphanumeric characters and other ASCII characters that are not special to the UNIX shell. See the “Quoting” section of sh(1).

  • The name can contain only single-byte, 8-bit characters.

  • The name must be unique on the system.

  • A boot environment must have the status “complete” before you rename it. See Displaying the Status of All Boot Environments to determine a boot environment's status.

  • You cannot rename a boot environment that has file systems mounted with lumount or mount.


ProcedureTo Change the Name of an Inactive Boot Environment

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lurename -e  BE_name -n  new_name
    
    -e BE_name

    Specifies the inactive boot environment name to be changed

    -n new_name

    Specifies the new name of the inactive boot environment

    In this example, second_disk is renamed to third_disk.


    # lurename -e  second_disk  -n  third_disk
    

Adding or Changing a Description Associated With a Boot Environment Name

You can associate a description with a boot environment name. The description never replaces the name. Although a boot environment name is restricted in length and characters, the description can be of any length and of any content. The description can be simple text or as complex as a gif file. You can create the description when you create the boot environment, by using the lucreate command with the -A option, or at any time afterward, by using the ludesc command.

For more information about using the -A option with lucreate, see To Create a Boot Environment for the First Time.

For more information about creating the description after the boot environment has been created, see ludesc(1M).
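
For example, to attach a description when the boot environment is created, pass the -A option to lucreate. The following is a sketch only; the slice and boot environment name are hypothetical:


# lucreate -A 'Solaris 10 5/09 test build' -m /:/dev/dsk/c0t1d0s0:ufs -n second_disk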

ProcedureTo Add or Change a Description for a Boot Environment Name With Text

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -n  BE_name 'BE_description'
    
    -n BE_name 'BE_description'

    Specifies the boot environment name and the new description to be associated with the name


Example 7–5 Adding a Description to a Boot Environment Name With Text

In this example, a boot environment description is added to a boot environment that is named second_disk. The description is text that is enclosed in single quotes.


# /usr/sbin/ludesc -n second_disk 'Solaris 10 5/09 test build'

ProcedureTo Add or Change a Description for a Boot Environment Name With a File

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -n BE_name -f file_name
    
    -n BE_name

    Specifies the boot environment name

    file_name

    Specifies the file to be associated with a boot environment name


Example 7–6 Adding a Description to a Boot Environment Name With a File

In this example, a boot environment description is added to a boot environment that is named second_disk. The description is contained in a gif file.


# /usr/sbin/ludesc -n second_disk -f rose.gif

ProcedureTo Determine a Boot Environment Name From a Text Description

The following command returns the name of the boot environment associated with the specified description.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -A 'BE_description'
    
    -A 'BE_description'

    Specifies the description to be associated with the boot environment name.


Example 7–7 Determining a Boot Environment Name From a Description

In this example, the name of the boot environment, second_disk, is determined by using the -A option with the description.


# /usr/sbin/ludesc -A  'Solaris 10 5/09 test build'
 second_disk

ProcedureTo Determine a Boot Environment Name From a Description in a File

The following command displays the boot environment's name that is associated with a file. The file contains the description of the boot environment.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -f  file_name
    
    -f file_name

    Specifies the name of the file that contains the description of the boot environment.


Example 7–8 Determining a Boot Environment Name From a Description in a File

In this example, the name of the boot environment, second_disk, is determined by using the -f option and the name of the file that contains the description.


# /usr/sbin/ludesc -f rose.gif
second_disk

ProcedureTo Determine a Boot Environment Description From a Name

This procedure displays the description of the boot environment that is named in the command.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # /usr/sbin/ludesc -n BE_name
    
    -n BE_name

    Specifies the boot environment name.


Example 7–9 Determining a Boot Environment Description From a Name

In this example, the description is determined by using the -n option with the boot environment name.


# /usr/sbin/ludesc -n  second_disk 
Solaris 10 5/09 test build

Viewing the Configuration of a Boot Environment

Use the lufslist command to list the configuration of a boot environment. The output contains the disk slice (file system), file system type, and file system size for each boot environment mount point.

ProcedureTo View the Configuration of a Boot Environment

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Type:


    # lufslist -n BE_name
    
    BE_name

    Specifies the name of the boot environment to view file system specifics

    The following example displays a list.


    Filesystem                fstype       size(Mb) Mounted on
    ------------------------------------------------------------------
    /dev/dsk/c0t0d0s1         swap           512.11 -
    /dev/dsk/c0t4d0s3         ufs           3738.29 /
    /dev/dsk/c0t4d0s4         ufs            510.24 /opt

    Note –

    For an example of a list that contains non-global zones, see To View the Configuration of a Boot Environment's Non-Global Zone File Systems.


Chapter 8 Upgrading the Solaris OS on a System With Non-Global Zones Installed

This chapter describes using Solaris Live Upgrade to upgrade a system that has non-global zones installed.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For procedures for migrating a UFS file system with non-global zones to a ZFS root pool, see Chapter 14, Solaris Live Upgrade For ZFS With Non-Global Zones Installed.


This chapter contains the following sections:

Upgrading With Solaris Live Upgrade and Installed Non-Global Zones (Overview)

Starting with the Solaris 10 8/07 release, you can upgrade or patch a system that contains non-global zones with Solaris Live Upgrade. If you have a system that contains non-global zones, Solaris Live Upgrade is the recommended program to upgrade and to add patches. Other upgrade programs might require extensive upgrade time, because the time required to complete the upgrade increases linearly with the number of installed non-global zones. If you are patching a system with Solaris Live Upgrade, you do not have to take the system to single-user mode, and you can maximize your system's uptime. Solaris Live Upgrade includes changes that accommodate systems with non-global zones installed; these changes are described in the following sections.

Understanding Solaris Zones and Solaris Live Upgrade

The Solaris Zones partitioning technology is used to virtualize operating system services and provide an isolated and secure environment for running applications. A non-global zone is a virtualized operating system environment created within a single instance of the Solaris OS, the global zone. When you create a non-global zone, you produce an application execution environment in which processes are isolated from the rest of the system.

Solaris Live Upgrade is a mechanism to copy the currently running system onto new slices. When non-global zones are installed, they can be copied to the inactive boot environment along with the global zone's file systems.

Figure 8–1 shows a non-global zone that is copied to the inactive boot environment along with the global zone's file system.

Figure 8–1 Creating a Boot Environment – Copying Non-Global Zones


Figure 8–2 shows that a non-global zone is copied to the inactive boot environment.

Figure 8–2 Creating a Boot Environment – Copying a Shared File System From a Non-Global Zone


Guidelines for Using Solaris Live Upgrade With Non-Global Zones (Planning)

Planning for using non-global zones includes the limitations described below.

Table 8–1 Limitations When Upgrading With Non-Global Zones

Problem 

Description 

Consider these issues when using Solaris Live Upgrade on a system with zones installed. It is critical to avoid zone state transitions during lucreate and lumount operations.

  • When you use the lucreate command to create an inactive boot environment, if a given non-global zone is not running, then the zone cannot be booted until the lucreate operation has completed.

  • When you use the lucreate command to create an inactive boot environment, if a given non-global zone is running, the zone should not be halted or rebooted until the lucreate operation has completed.

  • When an inactive boot environment is mounted with the lumount command, you cannot boot non-global zones or reboot them, although zones that were running before the lumount operation can continue to run.

  • Because a non-global zone can be controlled by a non-global zone administrator as well as by the global zone administrator, to prevent any interaction, halt all zones during lucreate or lumount operations.

Problems can occur when the global zone administrator does not notify the non-global zone administrator of an upgrade with Solaris Live Upgrade. 

When Solaris Live Upgrade operations are underway, non-global zone administrator involvement is critical. The upgrade affects the work of the administrators, who will be addressing the changes that occur as a result of the upgrade. Zone administrators should ensure that any local packages are stable throughout the sequence, handle any post-upgrade tasks such as configuration file adjustments, and generally schedule around the system outage.  

For example, if a non-global zone administrator adds a package while the global zone administrator is copying the file systems with the lucreate command, the new package is not copied with the file systems and the non-global zone administrator is unaware of the problem.

Creating a Boot Environment When a Non-Global Zone Is on a Separate File System

Creating a new boot environment from the currently running boot environment remains the same as in previous releases, with one exception: you can specify a destination disk slice for a shared file system within a non-global zone. This exception applies when a non-global zone has a separate file system that was created with the zonecfg add fs command and that separate file system resides on a shared file system, such as /zone1/root/export.

To prevent this separate file system from being shared in the new boot environment, the lucreate command enables specifying a destination slice for a separate file system for a non-global zone. The argument to the -m option has a new optional field, zonename. This new field places the non-global zone's separate file system on a separate slice in the new boot environment. For more information about setting up a non-global zone with a separate file system, see zonecfg(1M).


Note –

By default, any file system other than the critical file systems (root (/), /usr, and /opt file systems) is shared between the current and new boot environments. Updating shared files in the active boot environment also updates data in the inactive boot environment. For example, the /export file system is a shared file system. If you use the -m option and the zonename option, the non-global zone's file system is copied to a separate slice and data is not shared. This option prevents non-global zone file systems that were created with the zonecfg add fs command from being shared between the boot environments.


Creating and Upgrading a Boot Environment When Non-Global Zones Are Installed (Tasks)

The following sections provide step-by-step procedures for upgrading when non-global zones are installed.

ProcedureUpgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks)

The following procedure provides detailed instructions for upgrading with Solaris Live Upgrade for a system with non-global zones installed.

  1. Before running Solaris Live Upgrade for the first time, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve info doc 206844. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

    The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

    The following substeps describe the steps in the SunSolve info doc 206844.

    1. Become superuser or assume an equivalent role.

    2. From the SunSolve web site, follow the instructions in info doc 206844 to remove and add Solaris Live Upgrade packages.

      The following instructions summarize the info doc steps for removing and adding the packages.

      • Remove existing Solaris Live Upgrade packages.

        The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade or patch by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading or patching to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.


        # pkgrm SUNWlucfg SUNWluu SUNWlur
        
      • Install the new Solaris Live Upgrade packages.

        You can install the packages by using the liveupgrade20 command that is on the installation DVD or CD. The liveupgrade20 command requires Java software. If your system does not have Java software installed, then you need to use the pkgadd command to install the packages. See the SunSolve info doc for more information.

        • If you are using the Solaris Operating System DVD, change directories and run the installer:

          • Change directories.


            # cd /cdrom/cdrom0/Solaris_10/Tools/Installers
            

            Note –

            For SPARC based systems, the path to the installer is different for releases previous to the Solaris 10 10/08 release:


            # cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
            

          • Run the installer


            # ./liveupgrade20
            

            The Solaris installation program GUI is displayed. If you are using a script, you can prevent the GUI from displaying by using the -noconsole and -nodisplay options.

        • If you are using the Solaris Software – 2 CD, you can run the installer without changing the path.


          % ./installer
          
        • Verify that the packages have been installed successfully.


          # pkgchk -v SUNWlucfg SUNWlur SUNWluu
          
    3. If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches.

    4. From the SunSolve web site, obtain the list of patches.

    5. Change to the patch directory as in this example.


      # cd /var/tmp/lupatches
      
    6. Install the patches.


      # patchadd -M path-to-patches patch-id patch-id
      

      path-to-patches is the path to the patches directory, such as /var/tmp/lupatches. patch-id is the patch number or numbers. Separate multiple patch names with a space.


      Note –

      The patches need to be applied in the order specified in infodoc 206844.
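
      For example, with the patch directory above and hypothetical patch IDs:


      # patchadd -M /var/tmp/lupatches 123456-01 123457-01
      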


    7. Reboot the system if necessary. Certain patches require a reboot to be effective.

      x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.


      # init 6
      

      You now have the packages and patches necessary for a successful creation of a new boot environment.

  2. Create the new boot environment.


    # lucreate [-A 'BE_description'] [-c BE_name] \
     -m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...] -n BE_name
    
    -n BE_name

    The name of the boot environment to be created. BE_name must be unique on the system.

    -A 'BE_description'

    (Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.

    -c BE_name

    Assigns the name BE_name to the active boot environment. This option is not required and is only used when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.

    -m mountpoint:device[,metadevice]:fs_options[:zonename] [-m ...]

    Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.

    • mountpoint can be any valid mount point or – (hyphen), indicating a swap partition.

    • device field can be one of the following:

      • The name of a disk device, of the form /dev/dsk/cwtxdysz

      • The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum

      • The name of a Veritas Volume Manager volume, of the form /dev/md/vxfs/dsk/dnum

      • The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent

    • fs_options field can be one of the following:

      • ufs, which indicates a UFS file system.

      • vxfs, which indicates a Veritas file system.

      • swap, which indicates a swap volume. The swap mount point must be a – (hyphen).

      • For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors).

    • zonename specifies that a non-global zone's separate file system be placed on a separate slice. This option is used when the zone's separate file system is in a shared file system such as /zone1/root/export. This option copies the zone's separate file system to a new slice and prevents this file system from being shared. The separate file system was created with the zonecfg add fs command.

    In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. The non-global zone named zone1 is given a separate mount point on c0t1d0s1.


    Note –

    By default, any file system other than the critical file systems (root (/), /usr, and /opt file systems) is shared between the current and new boot environments. The /export file system is a shared file system. If you use the -m option, the non-global zone's file system is placed on a separate slice and data is not shared. This option prevents zone file systems that were created with the zonecfg add fs command from being shared between the boot environments. See zonecfg(1M) for details.



    # lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
    
  3. Upgrade the boot environment.

    The operating system image to be used for the upgrade is taken from the network.


    # luupgrade -u -n BE_name -s os_image_path
    
    -u

    Upgrades an operating system image on a boot environment

    -n BE_name

    Specifies the name of the boot environment that is to be upgraded

    -s os_image_path

    Specifies the path name of a directory that contains an operating system image

    In this example, the new boot environment, newbe, is upgraded from a network installation image.


    # luupgrade -n newbe -u -s /net/server/export/Solaris_10/combined.solaris_wos
    
  4. (Optional) Verify that the boot environment is bootable.

    The lustatus command reports if the boot environment creation is complete and bootable.


    # lustatus
    boot environment   Is        Active   Active     Can      Copy
    Name               Complete  Now      OnReboot   Delete   Status
    ------------------------------------------------------------------------
    c0t1d0s0           yes       yes      yes        no       -
    newbe              yes       no       no         yes      -
  5. Activate the new boot environment.


    # luactivate BE_name
    

    BE_name specifies the name of the boot environment that is to be activated.


    Note –

    For an x86 based system, the luactivate command is required when booting a boot environment for the first time. Subsequent activations can be made by selecting the boot environment from the GRUB menu. For step-by-step instructions, see x86: Activating a Boot Environment With the GRUB Menu.


    To successfully activate a boot environment, that boot environment must meet several conditions. For more information, see Activating a Boot Environment.
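
    Continuing the example from Step 2, you would activate the boot environment named newbe:


    # luactivate newbe
    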

  6. Reboot.


    # init 6
    

    Caution – Caution –

    Use only the init or shutdown commands to reboot. If you use the reboot, halt, or uadmin commands, the system does not switch boot environments. The most recently active boot environment is booted again.


    The boot environments have switched and the new boot environment is now the current boot environment.

  7. (Optional) Fall back to a different boot environment.

    If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Upgrading a System With Non-Global Zones Installed (Example)

The following procedure provides an example with abbreviated instructions for upgrading with Solaris Live Upgrade.

For detailed explanations of steps, see Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System (Tasks).

Upgrading With Solaris Live Upgrade When Non-Global Zones Are Installed on a System

The following example provides abbreviated descriptions of the steps to upgrade a system with non-global zones installed. In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 10 release. This system has non-global zones installed and has a non-global zone with a separate file system on a shared file system, zone1/root/export. The new boot environment is upgraded to the Solaris 10 5/09 release by using the luupgrade command. The upgraded boot environment is activated by using the luactivate command.


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Install required patches.

    Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for the info doc 206844 (formerly 72099) on the SunSolve web site. In this example, /net/server/export/patches is the path to the patches.


    # patchadd /net/server/export/patches
    # init 6
    
  2. Remove the Solaris Live Upgrade packages from the current boot environment.


    # pkgrm SUNWlucfg SUNWluu SUNWlur
    
  3. Insert the Solaris DVD or CD. Then install the replacement Solaris Live Upgrade packages from the target release.


    # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu
    
  4. Create a boot environment.

    In the following example, a new boot environment named newbe is created. The root (/) file system is placed on c0t1d0s4. All non-global zones in the current boot environment are copied to the new boot environment. A separate file system was created with the zonecfg add fs command for zone1. This separate file system, /zone/root/export, is placed on a separate slice, c0t1d0s1. This option prevents the separate file system from being shared between the current boot environment and the new boot environment.


    # lucreate -n newbe -m /:/dev/dsk/c0t1d0s4:ufs -m /export:/dev/dsk/c0t1d0s1:ufs:zone1
    
  5. Upgrade the new boot environment.

    In this example, /net/server/export/Solaris_10/combined.solaris_wos is the path to the network installation image.


    # luupgrade -n newbe -u -s  /net/server/export/Solaris_10/combined.solaris_wos
    
  6. (Optional) Verify that the boot environment is bootable.

    The lustatus command reports if the boot environment creation is complete.


    # lustatus
    boot environment   Is        Active   Active     Can      Copy
    Name               Complete  Now      OnReboot   Delete   Status
    ------------------------------------------------------------------------
    c0t1d0s0           yes       yes      yes        no       -
    newbe              yes       no       no         yes      -
  7. Activate the new boot environment.


    # luactivate newbe
    # init 6
    

    The boot environment newbe is now active.

  8. (Optional) Fall back to a different boot environment. If the new boot environment is not viable or you want to switch to another boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Administering Boot Environments That Contain Non-Global Zones

The following sections provide information about administering boot environments that contain non-global zones.

ProcedureTo View the Configuration of a Boot Environment's Non-Global Zone File Systems

Use this procedure to display a list of file systems for both the global zone and the non-global zones.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Display the list of file systems.


    # lufslist -n BE_name
    
    BE_name

    Specifies the name of the boot environment to view file system specifics


Example 8–1 List File Systems With Non-Global Zones

The following example displays a list of file systems that include non-global zones.


# lufslist -n s3
boot environment name: s3
This boot environment is currently active.
This boot environment will be active on next system boot.

Filesystem              fstype    device size Mounted on Mount Options
------------------------------------------------------------------
/dev/dsk/c0t0d0s1         swap     2151776256   -        -
/dev/dsk/c0t0d0s3         ufs     10738040832   /        -
/dev/dsk/c0t0d0s7         ufs     10487955456   /export  -
                zone <zone1> within boot environment <s3>
/dev/dsk/c0t0d0s5         ufs      5116329984   /export  -

ProcedureTo Compare Boot Environments for a System With Non-Global Zones Installed

The lucompare command now generates a comparison of boot environments that includes the contents of any non-global zone.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Compare the current and new boot environments.


    # /usr/sbin/lucompare -i  infile (or) -t -o  outfile BE_name
    
    -i  infile

    Compare files that are listed in infile. The files to be compared should have absolute file names. If the entry in the file is a directory, the comparison is recursive to the directory. Use either this option or -t, not both.

    -t

    Compare only nonbinary files. This comparison uses the file(1) command on each file to determine if the file is a text file. Use either this option or -i, not both.

    -o  outfile

    Redirect the output of differences to outfile.

    BE_name

    Specifies the name of the boot environment that is compared to the active boot environment.


Example 8–2 Comparing Boot Environments

In this example, the current boot environment (source) is compared to the second_disk boot environment, and the results are sent to a file.


# /usr/sbin/lucompare -i  /etc/lu/compare/ -o /var/tmp/compare.out second_disk

Using the lumount Command on a System That Contains Non-Global Zones

The lumount command provides non-global zones with access to their corresponding file systems that exist on inactive boot environments. When the global zone administrator uses the lumount command to mount an inactive boot environment, the boot environment is mounted for non-global zones as well.

In the following example, the appropriate file systems are mounted for the boot environment, newbe, on /mnt in the global zone. For non-global zones that are running, mounted, or ready, their corresponding file systems within newbe are also made available on /mnt within each zone.


# lumount -n newbe /mnt

For more information about mounting, see the lumount(1M) man page.
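
When you no longer need access to the inactive boot environment's file systems, unmount them with the luumount command. A minimal sketch, assuming the same boot environment name; see the luumount(1M) man page for details:


# luumount newbe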

Chapter 9 Solaris Live Upgrade (Examples)

This chapter provides examples of creating a boot environment, then upgrading and activating the new boot environment, which then becomes the currently running system.


Note –

This chapter describes Solaris Live Upgrade for UFS file systems. For procedures for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 13, Creating a Boot Environment for ZFS Root Pools.


This chapter contains the following sections:

Example of Upgrading With Solaris Live Upgrade

In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 9 release. The new boot environment is upgraded to the Solaris 10 5/09 release by using the luupgrade command. The upgraded boot environment is activated by using the luactivate command. An example of falling back to the original boot environment is also given.

Prepare to Use Solaris Live Upgrade

Before running Solaris Live Upgrade for the first time, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve info doc 206844. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

The following steps describe the procedure in the SunSolve info doc 206844.


Note –

This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.


  1. Become superuser or assume an equivalent role.

  2. From the SunSolve web site, follow the instructions in info doc 206844 to remove and add Solaris Live Upgrade packages.

    1. Remove existing Solaris Live Upgrade packages.

      The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade or patch by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading or patching to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.


      # pkgrm SUNWlucfg SUNWluu SUNWlur
      
    2. Install the new Solaris Live Upgrade packages.

      You can install the packages by using the liveupgrade20 command that is on the installation DVD or CD or by using the pkgadd command. The liveupgrade20 command requires Java software. If your system does not have Java software installed, then you need to use the pkgadd command to install the packages. See the SunSolve info doc for more information.

      • If you are using the Solaris Operating System DVD, change directories and run the installer:

        • Change directories.


          # cd /cdrom/cdrom0/Solaris_10/Tools/Installers
          

          Note –

          For SPARC based systems, the path to the installer is different for releases previous to the Solaris 10 10/08 release:


          # cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
          

        • Run the installer


          # ./liveupgrade20 -noconsole -nodisplay
          

          The -noconsole and -nodisplay options prevent the character user interface (CUI) from displaying.


          Note –

          The Solaris Live Upgrade CUI is no longer supported.


      • If you are using the Solaris Software – 2 CD, you can run the installer without changing the path.


        % ./installer
        
      • Verify that the packages have been installed successfully.


        # pkgchk -v SUNWlucfg SUNWlur SUNWluu
        
  3. Install the patches listed in info doc 206844.

    1. If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches.

    2. From the SunSolve web site, obtain the list of patches.

    3. Change to the patch directory as in this example.


      # cd /var/tmp/lupatches
      
    4. Install the patches.


      # patchadd -M path-to-patches patch-id patch-id
      

      path-to-patches is the path to the patch directory, such as /var/tmp/lupatches. patch-id is the patch number or numbers. Separate multiple patch names with a space.


      Note –

      The patches need to be applied in the order specified in infodoc 206844.


    5. Reboot the system if necessary. Certain patches require a reboot to be effective.

      x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.


      # init 6
      

      You now have the packages and patches necessary for a successful creation of a new boot environment.

To Create a Boot Environment

The source boot environment is named c0t4d0s0 by using the -c option. Naming the source boot environment is required only when the first boot environment is created. For more information about naming using the -c option, see the description in “To Create a Boot Environment for the First Time” Step 2.

The new boot environment is named c0t15d0s0. The -A option creates a description that is associated with the boot environment name.

The root (/) file system is copied to the new boot environment. Also, a new swap slice is created rather than sharing the source boot environment's swap slice.


# lucreate -A 'BE_description' -c c0t4d0s0 -m /:/dev/dsk/c0t15d0s0:ufs \
-m -:/dev/dsk/c0t15d0s1:swap -n c0t15d0s0

To Upgrade the Inactive Boot Environment

The inactive boot environment is named c0t15d0s0. The operating system image to be used for the upgrade is taken from the network.


# luupgrade -n c0t15d0s0 -u \
-s /net/ins-svr/export/Solaris_10/combined.solaris_wos

To Check if Boot Environment Is Bootable

The lustatus command reports if the boot environment creation is complete. lustatus also shows if the boot environment is bootable.


# lustatus
boot environment   Is        Active   Active     Can      Copy
Name               Complete  Now      OnReboot   Delete   Status
------------------------------------------------------------------------
c0t4d0s0           yes       yes      yes        no       -
c0t15d0s0          yes       no       no         yes      -

To Activate the Inactive Boot Environment

The c0t15d0s0 boot environment is made bootable with the luactivate command. The system is then rebooted and c0t15d0s0 becomes the active boot environment. The c0t4d0s0 boot environment is now inactive.


# luactivate c0t15d0s0
# init 6

(Optional) To Fall Back to the Source Boot Environment

The procedure for falling back depends on your new boot environment activation situation, as shown in the following examples.


Example 9–1 SPARC: To Fall Back Despite Successful Boot Environment Creation

In this example, the new boot environment was activated successfully, but the original boot environment, c0t4d0s0, is reinstated as the active boot environment.


# /sbin/luactivate c0t4d0s0
# init 6


Example 9–2 SPARC: To Fall Back From a Failed Boot Environment Activation

In this example, the new boot environment was not bootable. You must return to the OK prompt before booting from the original boot environment, c0t4d0s0, in single-user mode.


OK boot net -s
# /sbin/luactivate first_disk
Do you want to fallback to activate boot environment c0t4d0s0 
(yes or no)? yes
# init 6

The original boot environment, c0t4d0s0, becomes the active boot environment.



Example 9–3 SPARC: To Fall Back to the Original Boot Environment by Using a DVD, CD, or Net Installation Image

In this example, the new boot environment was not bootable. You cannot boot from the original boot environment and must use media or a net installation image. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.


OK boot net -s
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt 
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0 
(yes or no)? yes
# umount /mnt 
# init 6


Example 9–4 x86: To Fall Back to the Original Boot Environment By Using the GRUB Menu

Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the GRUB menu.

In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. To enable a fallback, the original boot environment is booted in single-user mode.

  1. Become superuser or assume an equivalent role.

  2. To display the GRUB menu, reboot the system.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris                                                            |
    |Solaris failsafe                                                   |
    |second_disk                                                        |
    |second_disk failsafe                                               |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  3. From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to Example 9–5.

  4. Edit the GRUB menu by typing: e.

  5. Select kernel /boot/multiboot by using the arrow keys and type e. The grub edit menu is displayed.


    grub edit>kernel /boot/multiboot
  6. Boot to single-user mode by typing -s.


    grub edit>kernel /boot/multiboot -s
    
  7. Boot and mount the boot environment. Then activate it.


# b
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt 
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6


Example 9–5 x86: To Fall Back to the Original Boot Environment With the GRUB Menu by Using the DVD or CD

Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the DVD or CD.

In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. To enable a fallback, the original boot environment is booted in single-user mode.

  1. Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.

  2. Become superuser or assume an equivalent role.

  3. Boot from the DVD or CD.


    # init 6
    

    The GRUB menu is displayed.


    GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
    +-------------------------------------------------------------------+
    |Solaris 10 5/09                                                    |
    |Solaris 10 5/09 Serial Console ttya                                |
    |Solaris 10 5/09 Serial Console ttyb (for lx50, v60x and v65x       |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted. Press
    enter to boot the selected OS, 'e' to edit the commands before
    booting, or 'c' for a command-line.
  4. Wait for the default option to boot or choose any option displayed.

    The installation screen is displayed.


    +-------------------------------------------------------------------+
    
    |Select the type of installation you want to perform:                |
    |                                                                    |
    |         1 Solaris Interactive                                      |
    |         2 Custom JumpStart                                         |
    |         3 Solaris Interactive Text (Desktop session)               |
    |         4 Solaris Interactive Text (Console session)               |
    |         5 Apply driver updates                                     |
    |         6 Single user shell                                        |
    |                                                                    |
    |        Enter the number of your choice followed by the <ENTER> key.|
    |        Alternatively, enter custom boot arguments directly.        |
    |                                                                    |
    |         If you wait 30 seconds without typing anything,            |
    |         an interactive installation will be started.               |
    +-------------------------------------------------------------------+
  5. Choose the “Single user shell” option.

    The following message is displayed.


    Do you wish to automatically update the boot archive? y /n
  6. Type: n


    Starting shell...
    #

    You are now in single user mode.

  7. Mount the boot environment. Then activate and reboot.


    # fsck /dev/dsk/c0t4d0s0
    # mount /dev/dsk/c0t4d0s0 /mnt 
    # /mnt/sbin/luactivate
    Do you want to fallback to activate boot environment c0t4d0s0
    (yes or no)? yes
    # umount /mnt
    # init 6
    

Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror)

This example shows how to create a boot environment that contains a mirrored root (/) file system, create a second boot environment by detaching one side of the mirror and preserving its contents, upgrade and activate the detached boot environment, and then rebuild the mirror by attaching the remaining submirror with the metattach command.

Figure 9–1 shows the current boot environment, which contains three physical disks.

Figure 9–1 Detaching and Upgrading One Side of a RAID-1 Volume (Mirror)


  1. Create a new boot environment, second_disk, that contains a mirror.

    The following command performs these tasks.

    • lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system, which is copied to the mirror d10. All data on the mirror d10 is overwritten.

    • Two slices, c0t1d0s0 and c0t2d0s0, are specified to be used as submirrors. These two submirrors are attached to mirror d10.


    # lucreate -c first_disk -n second_disk \ 
    -m /:/dev/md/dsk/d10:ufs,mirror \ 
    -m /:/dev/dsk/c0t1d0s0:attach \ 
    -m /:/dev/dsk/c0t2d0s0:attach
    
  2. Activate the second_disk boot environment.


    # /sbin/luactivate second_disk
    # init 6
    
  3. Create another boot environment, third_disk.

    The following command performs these tasks.

    • lucreate configures a UFS file system for the mount point root (/). A mirror, d20, is created.

    • Slice c0t1d0s0 is removed from its current mirror and is added to mirror d20. The contents of the submirror, the root (/) file system, are preserved and no copy occurs.


    # lucreate -n third_disk \ 
    -m /:/dev/md/dsk/d20:ufs,mirror \ 
    -m /:/dev/dsk/c0t1d0s0:detach,attach,preserve
    
  4. Upgrade the new boot environment, third_disk


    # luupgrade -u -n third_disk \ 
    -s /net/installmachine/export/Solaris_10/OS_image
    
  5. Add a patch to the upgraded boot environment.


    # luupgrade -t -n third_disk -s /net/patches 222222-01
    
  6. Activate the third_disk boot environment to make this boot environment the currently running system.


    # /sbin/luactivate third_disk
    # init 6
    
  7. Delete the boot environment second_disk.


    # ludelete second_disk
    
  8. The following commands perform these tasks.

    • Clear mirror d10.

    • Check for the number for the concatenation of c0t2d0s0.

    • Attach the concatenation that is found by the metastat command to the mirror d20. The metattach command synchronizes the newly attached concatenation with the concatenation in mirror d20. All data on the concatenation is overwritten.


    # metaclear d10 
    # metastat -p | grep c0t2d0s0
    dnum 1 1 c0t2d0s0
    # metattach d20 dnum
    
    num

    Is the number found in the metastat command for the concatenation

The new boot environment, third_disk, has been upgraded and is the currently running system. third_disk contains the root (/) file system that is mirrored.

Figure 9–2 shows the entire process of detaching a mirror and upgrading the mirror by using the commands in the preceding example.

Figure 9–2 Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) (continued)


Example of Migrating From an Existing Volume to a Solaris Volume Manager RAID-1 Volume

Solaris Live Upgrade enables the creation of a new boot environment on RAID–1 volumes (mirrors). The current boot environment's file systems can be on any of the following:

• A physical storage slice

• A Solaris Volume Manager controlled RAID-1 volume

• A Veritas VxVM controlled volume

However, the new boot environment's target must be a Solaris Volume Manager RAID-1 volume. For example, the device that is designated to receive the copy of the root (/) file system must be a Solaris Volume Manager mirror such as /dev/md/dsk/d1; it cannot be a Veritas volume such as /dev/vx/dsk/rootvol, where rootvol is the Veritas volume that contains the root (/) file system.

In this example, the current boot environment contains the root (/) file system on a volume that is not a Solaris Volume Manager volume. The new boot environment is created with the root (/) file system on the Solaris Volume Manager RAID-1 volume d1, with the physical slice c0t2d0s0 attached as its submirror. The lucreate command migrates the current volume to the Solaris Volume Manager volume. The name of the new boot environment is svm_be. The lustatus command reports whether the new boot environment is ready to be activated and rebooted. The new boot environment is activated to become the current boot environment.


# lucreate -n svm_be -m /:/dev/md/dsk/d1:mirror,ufs \  
-m /:/dev/dsk/c0t2d0s0:attach
# lustatus
# luactivate svm_be
# lustatus
# init 6
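
Before activating the new boot environment, you might want to confirm that the attached submirror has finished synchronizing with the mirror. The following line is only a sketch that uses the mirror d1 from the lucreate command above; the metastat output is omitted.

# metastat d1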

Example of Creating an Empty Boot Environment and Installing a Solaris Flash Archive

The following procedures cover the three-step process:

• Creating an empty boot environment

• Installing a Solaris Flash archive on the empty boot environment

• Activating the boot environment, which then becomes the currently running boot environment

The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When you use the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. Slices are reserved for the file systems that are specified, but no file systems are copied. The boot environment is named but is not fully created until it is installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, the file systems are installed on the reserved slices. The boot environment is then activated.

To Create an Empty Boot Environment

In this first step, an empty boot environment is created. Slices are reserved for the file systems that are specified, but no copy of file systems from the current boot environment occurs. The new boot environment is named second_disk.


# lucreate  -s - -m /:/dev/dsk/c0t1d0s0:ufs \  
-n second_disk

The boot environment is ready to be populated with a Solaris Flash archive.
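
Before installing the archive, you can optionally review the new boot environment. This is only a sketch that assumes the second_disk boot environment created above; lufslist shows the file system configuration that is recorded for the boot environment, and lustatus reports whether the boot environment is complete.

# lufslist second_disk
# lustatus second_disk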

Figure 9–3 shows the creation of an empty boot environment.

Figure 9–3 Creating an Empty Boot Environment


To Install a Solaris Flash Archive on the New Boot Environment

In this second step, an archive is installed on the second_disk boot environment that was created in the previous example. The operating system versions for the -s and -a options are both Solaris 10 5/09 releases. The archive, named 10.flar, is specified with the -a option.


# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \ 
-a /net/server/archive/10.flar 

The boot environment is ready to be activated.

To Activate the New Boot Environment

In this last step, the second_disk boot environment is made bootable with the luactivate command. The system is then rebooted and second_disk becomes the active boot environment.


# luactivate second_disk
# init 6
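
After the reboot, you can confirm the result. This is only a sketch; lucurr prints the name of the currently running boot environment, and lustatus lists the status of all boot environments.

# lucurr
# lustatus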

Chapter 10 Solaris Live Upgrade (Command Reference)

The following list shows commands that you can type at the command line. Solaris Live Upgrade includes man pages for all the listed command-line utilities.

Solaris Live Upgrade Command-Line Options

luactivate(1M)
    Activate an inactive boot environment.

lucancel(1M)
    Cancel a scheduled copy or create job.

lucompare(1M)
    Compare an active boot environment with an inactive boot environment.

lumake(1M)
    Recopy file systems to update an inactive boot environment.

lucreate(1M)
    Create a boot environment.

lucurr(1M)
    Name the active boot environment.

ludelete(1M)
    Delete a boot environment.

ludesc(1M)
    Add a description to a boot environment name.

lufslist(1M)
    List critical file systems for each boot environment.

lumount(1M)
    Enable a mount of all of the file systems in a boot environment. This command enables you to modify the files in a boot environment while that boot environment is inactive.

lurename(1M)
    Rename a boot environment.

lustatus(1M)
    List status of all boot environments.

luumount(1M)
    Enable an unmount of all the file systems in a boot environment. This command enables you to modify the files in a boot environment while that boot environment is inactive.

luupgrade(1M)
    Upgrade an OS or install a flash archive on an inactive boot environment.
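
As a brief illustration of how two of these commands can be combined, the following sketch mounts an inactive boot environment, edits a file in it, and then unmounts it. The boot environment name second_disk, the mount point /mnt, and the file that is edited are illustrative assumptions only.

# lumount second_disk /mnt
# vi /mnt/etc/vfstab
# luumount second_disk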