This part provides an overview and instructions for using Solaris Live Upgrade to create and upgrade an inactive boot environment. The boot environment can then be switched to become the current boot environment.
This book provides information on how to use the Solaris Live Upgrade program to upgrade the Solaris operating system. It covers everything you need to know about using Solaris Live Upgrade, but reading the planning book in the installation documentation collection before you begin might be useful. The following references provide useful background before you upgrade your system.
The Solaris 10 11/06 Installation Guide: Planning For Installation and Upgrade provides system requirements and high-level planning information, such as planning guidelines for file systems and for an upgrade. The following list describes the chapters in the planning book and provides links to those chapters.
| Chapter Descriptions From the Planning Guide | Reference |
|---|---|
| This chapter describes new features in the Solaris installation programs. | |
| This chapter provides you with information about decisions you need to make before you install or upgrade the Solaris OS. Examples are deciding when to use a network installation image or DVD media and descriptions of all the Solaris installation programs. | |
| This chapter describes system requirements to install or upgrade to the Solaris OS. General guidelines for planning the disk space and default swap space allocation are also provided. Upgrade limitations are also described. | |
| This chapter contains checklists to help you gather all of the information that you need to install or upgrade your system. This information is useful, for example, if you are performing an interactive installation. | |
| These chapters provide overviews of several technologies that relate to a Solaris OS installation or upgrade. Guidelines and requirements related to these technologies are also included. These chapters include information about GRUB based booting, Solaris Zones partitioning technology, and RAID-1 volumes that can be created at installation. | |
This chapter describes the Solaris Live Upgrade process.
This book uses the term slice, but some Solaris documentation and programs might refer to a slice as a partition.
Solaris Live Upgrade provides a method of upgrading a system while the system continues to operate. While your current boot environment is running, you can duplicate the boot environment, then upgrade the duplicate. Or, rather than upgrading, you can install a Solaris Flash archive on a boot environment. The original system configuration remains fully functional and unaffected by the upgrade or installation of an archive. When you are ready, you can activate the new boot environment by rebooting the system. If a failure occurs, you can quickly revert to the original boot environment with a simple reboot. This switch eliminates the normal downtime of the test and evaluation process.
Solaris Live Upgrade enables you to duplicate a boot environment without affecting the currently running system. You can then do the following:
Upgrade a system.
Change the current boot environment's disk configuration to different file system types, sizes, and layouts on the new boot environment.
Maintain numerous boot environments with different images. For example, you can create one boot environment that contains current patches and create another boot environment that contains an Update release.
Some understanding of basic system administration is necessary before using Solaris Live Upgrade. For background information about system administration tasks such as managing file systems, mounting, booting, and managing swap, see the System Administration Guide: Devices and File Systems.
The following overview describes the tasks necessary to create a copy of the current boot environment, upgrade the copy, and switch the upgraded copy to become the active boot environment. The fallback process of switching back to the original boot environment is also described. Figure 2–1 describes this complete Solaris Live Upgrade process.
The following sections describe the Solaris Live Upgrade process.
The process of creating a boot environment provides a method of copying critical file systems from an active boot environment to a new boot environment. The disk is reorganized if necessary, file systems are customized, and the critical file systems are copied to the new boot environment.
Solaris Live Upgrade distinguishes between two file system types: critical file systems and shareable file systems. The following table describes these file system types.
| File System Type | Description | Examples and More Information |
|---|---|---|
| Critical file systems | Critical file systems are required by the Solaris OS. These file systems are separate mount points in the vfstab of the active and inactive boot environments. These file systems are always copied from the source to the inactive boot environment. Critical file systems are sometimes referred to as nonshareable. | Examples are root (/), /usr, /var, or /opt. |
| Shareable file systems | Shareable file systems are user-defined files such as /export that contain the same mount point in the vfstab in both the active and inactive boot environments. Therefore, updating shared files in the active boot environment also updates data in the inactive boot environment. When you create a new boot environment, shareable file systems are shared by default. But you can specify a destination slice and then the file systems are copied. | /export is an example of a file system that can be shared. For more detailed information about shareable file systems, see Guidelines for Selecting Slices for Shareable File Systems. |
| Swap | Swap is a special shareable file system. Like a shareable file system, all swap slices are shared by default. But, if you specify a destination directory for swap, the swap slice is copied. | For procedures about reconfiguring swap, see the following: |
Solaris Live Upgrade can create a boot environment with RAID-1 volumes (mirrors) on file systems. For an overview, see Creating a Boot Environment With RAID-1 Volume File Systems.
The process of creating a new boot environment begins by identifying an unused slice where a critical file system can be copied. If a slice is not available or a slice does not meet the minimum requirements, you need to format a new slice.
After the slice is defined, you can reconfigure the file systems on the new boot environment before the file systems are copied into the directories. You reconfigure file systems by splitting and merging them, which provides a simple way of editing the vfstab to connect and disconnect file system directories. You can merge file systems into their parent directories by specifying the same mount point. You can also split file systems from their parent directories by specifying different mount points.
After file systems are configured on the inactive boot environment, you begin the automatic copy. Critical file systems are copied to the designated directories. Shareable file systems are not copied, but are shared. The exception is that you can designate some shareable file systems to be copied. When the file systems are copied from the active to the inactive boot environment, the files are directed to the new directories. The active boot environment is not changed in any way.
| Description | For More Information |
|---|---|
| For procedures about splitting or merging file systems | |
| For an overview of creating a boot environment with RAID-1 volume file systems | |
The following figures illustrate various ways of creating new boot environments.
Figure 2–2 shows that critical file system root (/) has been copied to another slice on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. The new boot environment is an exact duplicate with the root (/) file system on a new slice. The file systems /swap and /export/home are shared by the active and inactive boot environments.
Figure 2–3 shows critical file systems that have been split and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system on one slice. On that slice, the root (/) file system contains the /usr, /var, and /opt directories. In the new boot environment, the root (/) file system is split and /usr and /opt are put on separate slices. The file systems /swap and /export/home are shared by both boot environments.
Figure 2–4 shows critical file systems that have been merged and have been copied to slices on a disk to create a new boot environment. The active boot environment contains the root (/) file system, /usr, /var, and /opt, with each file system on its own slice. In the new boot environment, /usr and /opt are merged into the root (/) file system on one slice. The file systems /swap and /export/home are shared by both boot environments.
Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems encapsulated in RAID-1 volumes. Solaris Volume Manager provides a powerful way to reliably manage your disks and data by using volumes. Solaris Volume Manager enables concatenations, stripes, and other complex configurations. Solaris Live Upgrade enables a subset of these tasks, such as creating a RAID-1 volume for the root (/) file system.
A volume can group disk slices across several disks to transparently appear as a single disk to the OS. Solaris Live Upgrade is limited to creating a boot environment for the root (/) file system that contains single-slice concatenations inside a RAID-1 volume (mirror). This limitation is because the boot PROM is restricted to choosing one slice from which to boot.
When creating a boot environment, you can use Solaris Live Upgrade to manage the following tasks.
Detach a single-slice concatenation (submirror) from a RAID-1 volume (mirror). The contents can be preserved to become the content of the new boot environment if necessary. Because the contents are not copied, the new boot environment can be quickly created. After the submirror is detached from the original mirror, the submirror is no longer part of the mirror. Reads and writes on the submirror are no longer performed through the mirror.
Create a boot environment that contains a mirror.
Attach a maximum of three single-slice concatenations to the newly created mirror.
You use the lucreate command with the -m option to create a mirror, detach submirrors, and attach submirrors for the new boot environment.
If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.
| Description | For More Information |
|---|---|
| For step-by-step procedures | To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface) |
| For an overview of creating RAID-1 volumes when installing | |
| For in-depth information about other complex Solaris Volume Manager configurations that are not supported if you are using Solaris Live Upgrade | Chapter 2, Storage Management Concepts, in Solaris Volume Manager Administration Guide |
Solaris Live Upgrade manages a subset of Solaris Volume Manager tasks. Table 2–1 shows the Solaris Volume Manager components that Solaris Live Upgrade can manage.
Table 2–1 Classes of Volumes
| Term | Description |
|---|---|
| concatenation | A RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a mirror. |
| mirror | A RAID-1 volume. See RAID-1 volume. |
| RAID-1 volume | A class of volume that replicates data by maintaining multiple copies. A RAID-1 volume is sometimes called a mirror. A RAID-1 volume is composed of one or more RAID-0 volumes that are called submirrors. |
| RAID-0 volume | A class of volume that can be a stripe or a concatenation. These components are also called submirrors. A stripe or concatenation is the basic building block for mirrors. |
| state database | A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. The state database tracks the location and status of all known state database replicas. |
| state database replica | A copy of a state database. The replica ensures that the data in the database is valid. |
| submirror | See RAID-0 volume. |
| volume | A group of physical slices or other volumes that appear to the system as a single logical device. A volume is functionally identical to a physical disk in the view of an application or file system. In some command-line utilities, a volume is called a metadevice. |
The following examples present command syntax for creating RAID-1 volumes for a new boot environment.
Figure 2–5 shows a new boot environment with a RAID-1 volume (mirror) that is created on two physical disks. The following command created the new boot environment and the mirror.
# lucreate -n second_disk -m /:/dev/md/dsk/d30:mirror,ufs \
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d31:attach \
-m /:/dev/dsk/c0t2d0s0,/dev/md/dsk/d32:attach \
-m -:/dev/dsk/c0t1d0s1:swap -m -:/dev/dsk/c0t2d0s1:swap
This command performs the following tasks:
Creates a new boot environment, second_disk.
Creates a mirror d30 and configures a UFS file system.
Creates a single-device concatenation on slice 0 of each physical disk. The concatenations are named d31 and d32.
Adds the two concatenations to mirror d30.
Copies the root (/) file system to the mirror.
Configures file systems for swap on slice 1 of each physical disk.
Figure 2–6 shows a new boot environment that contains a RAID-1 volume (mirror). The following command created the new boot environment and the mirror.
# lucreate -n second_disk -m /:/dev/md/dsk/d20:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:detach,attach,preserve
This command performs the following tasks:
Creates a new boot environment, second_disk.
Breaks mirror d10 and detaches concatenation d12.
Preserves the contents of concatenation d12. File systems are not copied.
Creates a new mirror d20. You now have two one-way mirrors d10 and d20.
Attaches concatenation d12 to mirror d20.
After you have created a boot environment, you can perform an upgrade on the boot environment. As part of that upgrade, the boot environment can contain RAID-1 volumes (mirrors) for any file systems. The upgrade does not affect any files in the active boot environment. When you are ready, you activate the new boot environment, which then becomes the current boot environment.
| Description | For More Information |
|---|---|
| For procedures about upgrading a boot environment | |
| For an example of upgrading a boot environment with a RAID-1 volume file system | Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) (Command-Line Interface) |
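As a minimal sketch of the upgrade step, assuming a boot environment named second_disk and an OS image mounted at /cdrom/cdrom0 (both the name and the path are hypothetical and vary by system):

# luupgrade -u -n second_disk -s /cdrom/cdrom0

The -u option upgrades the OS image on the named boot environment, and -s specifies the path to the installation media.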
Figure 2–7 shows an upgrade to an inactive boot environment.
Rather than an upgrade, you can install a Solaris Flash archive on a boot environment. The Solaris Flash installation feature enables you to create a single reference installation of the Solaris OS on a system. This system is called the master system. Then, you can replicate that installation on a number of systems that are called clone systems. In this situation, the inactive boot environment is a clone. When you install the Solaris Flash archive on a system, the archive replaces all the files on the existing boot environment as an initial installation would.
For procedures about installing a Solaris Flash archive, see Installing Solaris Flash Archives on a Boot Environment.
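The following is a minimal sketch of installing an archive on an inactive boot environment; the boot environment name second_disk, the OS image path, and the archive path are all hypothetical:

# luupgrade -f -n second_disk -s /cdrom/cdrom0 -a /net/installmachine/export/archive.flar

The -f option installs a Solaris Flash archive, -s specifies the path to an OS image, and -a specifies the path to the archive.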
The following figures show an installation of a Solaris Flash archive on an inactive boot environment. Figure 2–8 shows a system with a single hard disk. Figure 2–9 shows a system with two hard disks.
When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain system files and directories are copied from the last-active boot environment to the boot environment being booted. When you reboot the system, the configuration that you installed on the new boot environment is active. The original boot environment then becomes an inactive boot environment.
| Description | For More Information |
|---|---|
| For procedures about activating a boot environment | |
| For information about synchronizing the active and inactive boot environment | |
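A minimal activation sketch, assuming the new boot environment is named second_disk (a hypothetical name). The init command, rather than reboot, is generally required so that the shutdown scripts that complete the switch can run:

# luactivate second_disk
# init 6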
Figure 2–10 shows a switch after a reboot from an inactive to an active boot environment.
If a failure occurs, you can quickly fall back to the original boot environment with an activation and reboot. The use of fallback takes only the time to reboot the system, which is much quicker than backing up and restoring the original. The new boot environment that failed to boot is preserved. The failure can then be analyzed. You can only fall back to the boot environment that was used by luactivate to activate the new boot environment.
You fall back to the previous boot environment in the following ways:

| Problem | Action |
|---|---|
| The new boot environment boots successfully, but you are not happy with the results. | Run the luactivate command with the name of the previous boot environment and reboot. x86 only – Starting with the Solaris 10 1/06 release, you can fall back by selecting the original boot environment from the GRUB menu. The original boot environment and the new boot environment must be based on the GRUB software. Booting from the GRUB menu does not synchronize files between the old and new boot environments. For more information about synchronizing files, see Forcing a Synchronization Between Boot Environments. |
| The new boot environment does not boot. | Boot the fallback boot environment in single-user mode, run the luactivate command, and reboot. |
| You cannot boot in single-user mode. | Perform one of the following: |
For procedures to fall back, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
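A minimal fallback sketch, assuming the original boot environment is named first_disk (a hypothetical name) and is still bootable:

# luactivate first_disk
# init 6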
Figure 2–11 shows the switch that is made when you reboot to fall back.
You can also do various maintenance activities such as checking status, renaming, or deleting a boot environment. For maintenance procedures, see Chapter 7, Maintaining Solaris Live Upgrade Boot Environments (Tasks).
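For example, the following is a sketch of common maintenance commands; the boot environment names are hypothetical:

# lustatus
# lurename -e second_disk -n third_disk
# ludelete third_disk

The lustatus command reports the state of each boot environment, lurename renames a boot environment, and ludelete removes one.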
This chapter provides guidelines and requirements for review before installing and using Solaris Live Upgrade. You also should review general information about upgrading in Upgrade Planning in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade. This chapter contains the following sections:
Before you install and use Solaris Live Upgrade, become familiar with these requirements.
Solaris Live Upgrade is included in the Solaris software. You need to install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 11/06 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 11/06 release.
Table 3–1 lists releases that are supported by Solaris Live Upgrade.
Table 3–1 Supported Solaris Releases
| Your Current Release | Compatible Upgrade Release |
|---|---|
| Solaris 8 OS | Solaris 8, 9, or any Solaris 10 release |
| Solaris 9 OS | Solaris 9 or any Solaris 10 release |
| Solaris 10 OS | Any Solaris 10 release |
You can install the Solaris Live Upgrade packages by using the following:
The pkgadd command. The Solaris Live Upgrade packages are SUNWlur and SUNWluu, and these packages must be installed in that order.
An installer on the Solaris Operating System DVD, the Solaris Software - 2 CD, or a net installation image.
Be aware that the following patches might need to be installed for the correct operation of Solaris Live Upgrade.
| Description | For More Information |
|---|---|
| Caution – Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches. x86 only – If this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. Even if you do not see the error message, the necessary patches still might not be installed. Always verify that all patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade. The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, check that you have the latest Solaris Live Upgrade patches installed. | Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for info doc 72099 on the SunSolve web site. |
| If you are running the Solaris 8 or 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. | To install the Solaris Live Upgrade packages, use the pkgadd command. Or, for the Java 2 runtime environment, install the recommended patch cluster. The patch cluster is available at http://sunsolve.sun.com. |
For instructions about installing the Solaris Live Upgrade software, see Installing Solaris Live Upgrade.
If you have problems with Solaris Live Upgrade, you might be missing packages. In the following table, check that your OS has the listed packages, which are required to use Solaris Live Upgrade.
For the Solaris 10 release:
The following software groups contain all the required Solaris Live Upgrade packages.
Entire Solaris Software Group Plus OEM Support
Entire Solaris Software Group
Developer Solaris Software Group
End User Solaris Software Group
If you install one of the following software groups, you might not have all the packages required to use Solaris Live Upgrade.
Core System Support Software Group
Reduced Network Support Software Group
For information about software groups, see Disk Space Recommendations for Software Groups in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
Table 3–2 Required Packages for Solaris Live Upgrade
To check for packages on your system, type the following command.
% pkginfo package_name
Follow general disk space requirements for an upgrade. See Chapter 4, System Requirements, Guidelines, and Upgrade (Planning), in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
To estimate the file system size that is needed to create a boot environment, start the creation of a new boot environment. The size is calculated. You can then abort the process.
The disk on the new boot environment must be able to serve as a boot device. Some systems restrict which disks can serve as a boot device. Refer to your system's documentation to determine if any boot restrictions apply.
The disk might need to be prepared before you create the new boot environment. Check that the disk is formatted properly:
Identify slices large enough to hold the file systems to be copied.
Identify file systems that contain directories that you want to share between boot environments rather than copy. If you want a directory to be shared, you need to create a new boot environment with the directory put on its own slice. The directory is then a file system and can be shared with future boot environments. For more information about creating separate file systems for sharing, see Guidelines for Selecting Slices for Shareable File Systems.
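One way to inspect the existing slices on a candidate disk is the prtvtoc command; the disk name c0t1d0 in this sketch is hypothetical:

# prtvtoc /dev/rdsk/c0t1d0s2

The output lists each slice with its size, from which you can judge whether a slice is large enough for the file systems to be copied.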
Solaris Live Upgrade uses Solaris Volume Manager technology to create a boot environment that can contain file systems that are RAID-1 volumes (mirrors). Solaris Live Upgrade does not implement the full functionality of Solaris Volume Manager, but does require the following components of Solaris Volume Manager.
Table 3–3 Required Components for Solaris Live Upgrade and RAID-1 Volumes
| Requirement | Description | For More Information |
|---|---|---|
| You must create at least one state database and at least three state database replicas. | A state database stores information on disk about the state of your Solaris Volume Manager configuration. The state database is a collection of multiple, replicated database copies. Each copy is referred to as a state database replica. When a state database is copied, the replica protects against data loss from single points of failure. | For information about creating a state database, see Chapter 6, State Database (Overview), in Solaris Volume Manager Administration Guide. |
| Solaris Live Upgrade supports only a RAID-1 volume (mirror) with single-slice concatenations on the root (/) file system. | A concatenation is a RAID-0 volume. If slices are concatenated, the data is written to the first available slice until that slice is full. When that slice is full, the data is written to the next slice, serially. A concatenation provides no data redundancy unless it is contained in a RAID-1 volume. A RAID-1 volume can be composed of a maximum of three concatenations. | For guidelines about creating mirrored file systems, see Guidelines for Selecting Slices for Mirrored File Systems. |
You can use Solaris Live Upgrade to add patches and packages to a system. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command. When you use the luupgrade command, you can also use a Solaris Flash archive to install patches or packages.
When upgrading and adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail during an upgrade or can alter the active boot environment.
For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).
| Type of Installation | Description | For More Information |
|---|---|---|
| Adding patches to a boot environment | Create a new boot environment and use the luupgrade command with the -t option. | To Add Patches to an Operating System Image on a Boot Environment (Command-Line Interface) |
| Adding packages to a boot environment | Use the luupgrade command with the -p option. | To Add Packages to an Operating System Image on a Boot Environment (Command-Line Interface) |
| Using Solaris Live Upgrade to install a Solaris Flash archive | An archive contains a complete copy of a boot environment with new packages and patches already included. This copy can be installed on multiple systems. | |
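The following sketches show both options; the boot environment name, the paths, the patch ID, and the package name are all hypothetical:

# luupgrade -t -n second_disk -s /var/tmp/patches 119081-25
# luupgrade -p -n second_disk -s /var/tmp/packages SUNWpkgname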
The lucreate -m option specifies which file systems, and how many file systems, are created in the new boot environment. You must specify the exact number of file systems you want to create by repeating this option. When using the -m option to create file systems, follow these guidelines:
You must specify one -m option for the root (/) file system for the new boot environment. If you run lucreate without the -m option, the Configuration menu is displayed. The Configuration menu enables you to customize the new boot environment by redirecting files onto new mount points.
Any critical file systems that exist in the current boot environment and that are not specified in a -m option are merged into the next highest-level file system created.
Only the file systems that are specified by the -m option are created on the new boot environment. To create the same number of file systems that is on your current system, you must specify one -m option for each file system to be created.
For example, a single use of the -m option specifies where to put all the file systems. You merge all the file systems from the original boot environment into the one file system that is specified by the -m option. If you specify the -m option twice, you create two file systems. If you have file systems for root (/), /opt, and /var, you would use one -m option for each file system on the new boot environment, as shown in the sketch after these guidelines.
Do not duplicate a mount point. For example, you cannot have two root (/) file systems.
When you create file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from creating invalid configurations for critical file systems. For example, you could type a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.
Do not overlap slices when reslicing disks. If this condition exists, the new boot environment appears to have been created, but when activated, the boot environment does not boot. The overlapping file systems might be corrupted.
For Solaris Live Upgrade to work properly, the vfstab file on the active boot environment must have valid contents and must have an entry for the root (/) file system at the minimum.
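A minimal sketch of creating a boot environment with separate root (/), /var, and /opt file systems; the boot environment name and the device names are hypothetical:

# lucreate -n second_disk -m /:/dev/dsk/c0t1d0s0:ufs \
-m /var:/dev/dsk/c0t1d0s3:ufs -m /opt:/dev/dsk/c0t1d0s4:ufs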
When you create an inactive boot environment, you need to identify a slice where the root (/) file system is to be copied. Use the following guidelines when you select a slice for the root (/) file system. The slice must comply with the following:
Must be a slice from which the system can boot.
Must meet the recommended minimum size.
Can be on different physical disks or the same disk as the active root (/) file system.
Can be a Veritas Volume Manager volume (VxVM). If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.
You can create a new boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. Critical file systems that are copied to the new boot environment can be of the following types:
A physical slice.
A single-slice concatenation that is included in a RAID-1 volume (mirror). The slice that contains the root (/) file system can be a RAID-1 volume.
A single-slice concatenation that is included in a RAID-0 volume. The slice that contains the root (/) file system can be a RAID-0 volume.
When you create a new boot environment, the lucreate -m command recognizes the following three types of devices:
A physical slice in the form of /dev/dsk/cwtxdysz
A Solaris Volume Manager volume in the form of /dev/md/dsk/dnum
A Veritas Volume Manager volume in the form of /dev/vx/dsk/volume_name. If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.
If you have problems upgrading with Veritas VxVM, see System Panics When Upgrading With Solaris Live Upgrade Running Veritas VxVm.
Use the following guidelines to check whether a RAID-1 volume is busy or resyncing, or whether its volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.
For volume naming guidelines, see RAID Volume Name Requirements and Guidelines for Custom JumpStart and Solaris Live Upgrade in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
If a mirror or submirror needs maintenance or is busy, components cannot be detached. You should use the metastat command before creating a new boot environment and using the detach keyword. The metastat command checks if the mirror is in the process of resynchronization or if the mirror is in use. For information, see the man page metastat(1M).
If you use the detach keyword to detach a submirror, lucreate checks if a device is currently resyncing. If the device is resyncing, you cannot detach the submirror and you see an error message.
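For example, a quick check before detaching, assuming the mirror is named d10 (a hypothetical name):

# metastat d10

If the output reports a submirror state of Resyncing rather than Okay, wait for the resynchronization to finish before running lucreate with the detach keyword.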
Resynchronization is the process of copying data from one submirror to another submirror after the following problems:
Submirror failures.
System crashes.
A submirror has been taken offline and brought back online.
The addition of a new submirror.
For more information about resynchronization, see RAID-1 Volume (Mirror) Resynchronization in Solaris Volume Manager Administration Guide.
Use the lucreate command rather than Solaris Volume Manager commands to manipulate volumes on inactive boot environments. The Solaris Volume Manager software has no knowledge of boot environments, whereas the lucreate command contains checks that prevent you from inadvertently destroying a boot environment. For example, lucreate prevents you from overwriting or deleting a Solaris Volume Manager volume.
However, if you have already used Solaris Volume Manager software to create complex Solaris Volume Manager concatenations, stripes, and mirrors, you must use Solaris Volume Manager software to manipulate them. Solaris Live Upgrade is aware of these components and supports their use. Before using Solaris Volume Manager commands that can create, modify, or destroy volume components, use the lustatus or lufslist commands. These commands can determine which Solaris Volume Manager volumes contain file systems that are in use by a Solaris Live Upgrade boot environment.
These guidelines contain configuration recommendations and examples for a swap slice.
You can configure a swap slice in three ways by using the lucreate command with the -m option:
If you do not specify a swap slice, the swap slices belonging to the current boot environment are configured for the new boot environment.
If you specify one or more swap slices, these slices are the only swap slices that are used by the new boot environment. The two boot environments do not share any swap slices.
You can specify both sharing a swap slice and adding a new slice for swap.
The following examples show the three ways of configuring swap. The current boot environment is configured with the root (/) file system on c0t0d0s0. The swap file system is on c0t0d0s1.
In the following example, no swap slice is specified. The new boot environment contains the root (/) file system on c0t1d0s0. Swap is shared between the current and new boot environment on c0t0d0s1.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs
In the following example, a swap slice is specified. The new boot environment contains the root (/) file system on c0t1d0s0. A new swap file system is created on c0t1d0s1. No swap slice is shared between the current and new boot environment.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m -:/dev/dsk/c0t1d0s1:swap
In the following example, a swap slice is added and another swap slice is shared between the two boot environments. The new boot environment contains the root (/) file system on c0t1d0s0. A new swap slice is created on c0t1d0s1. The swap slice on c0t0d0s1 is shared between the current and new boot environment.
# lucreate -n be2 -m /:/dev/dsk/c0t1d0s0:ufs -m -:shared:swap -m -:/dev/dsk/c0t1d0s1:swap
A boot environment creation fails if the swap slice is being used by any boot environment except for the current boot environment. If the boot environment was created using the -s option, the alternate-source boot environment can use the swap slice, but not any other boot environment.
Solaris Live Upgrade copies the entire contents of a slice to the designated new boot environment slice. You might want some large file systems on that slice to be shared between boot environments rather than copied, to conserve space and copying time. File systems that are critical to the OS, such as root (/) and /var, must be copied. File systems such as /home are not critical file systems and could be shared between boot environments. Shareable file systems must be user-defined file systems that are on separate slices on both the active and new boot environments. You can reconfigure the disk several ways, depending on your needs.
| Reconfiguring a Disk | Examples | For More Information |
|---|---|---|
| You can reslice the disk before creating the new boot environment and put the shareable file system on its own slice. | For example, if the root (/) file system, /var, and /home are on the same slice, reconfigure the disk and put /home on its own slice. When you create any new boot environments, /home is shared with the new boot environment by default. | |
| If you want to share a directory, the directory must be split off to its own slice. The directory is then a file system that can be shared with another boot environment. You can use the lucreate command with the -m option to create a new boot environment and split a directory off to its own slice. But, the new file system cannot yet be shared with the original boot environment. You need to run the lucreate command with the -m option again to create another boot environment. The two new boot environments can then share the directory. | For example, if you wanted to upgrade from the Solaris 9 release to the Solaris 10 11/06 release and share /home, you could run the lucreate command with the -m option. You could create a Solaris 9 release with /home as a separate file system on its own slice. Then run the lucreate command with the -m option again to duplicate that boot environment. This third boot environment can then be upgraded to the Solaris 10 11/06 release. /home is shared between the Solaris 9 and Solaris 10 11/06 releases. | For a description of shareable and critical file systems, see File System Types. |
When you create a new boot environment, some directories and files can be excluded from a copy to the new boot environment. If you have excluded a directory, you can also reinstate specified subdirectories or files under the excluded directory. These subdirectories or files that have been restored are then copied to the new boot environment. For example, you could exclude from the copy all files and directories in /etc/mail, but include all files and directories in /etc/mail/staff. The following command copies the staff subdirectory to the new boot environment.
# lucreate -n second_disk -x /etc/mail -y /etc/mail/staff
Use the file-exclusion options with caution. Do not remove files or directories that are required by the system.
The following table lists the lucreate command options for removing and restoring directories and files.
| How Specified? | Exclude Options | Include Options |
|---|---|---|
| Specify the name of the directory or file | -x exclude_dir | -y include_dir |
| Use a file that contains a list | -f list_filename, -z list_filename | -Y list_filename, -z list_filename |
For examples of customizing the directories and files when creating a boot environment, see To Create a Boot Environment and Customize the Content (Command-Line Interface).
When you are ready to switch and make the new boot environment active, you quickly activate the new boot environment and reboot. Files are synchronized between boot environments the first time that you boot a newly created boot environment. “Synchronize” means that certain critical system files and directories might be copied from the last-active boot environment to the boot environment being booted. Those files and directories that have changed are copied.
Solaris Live Upgrade checks for critical files that have changed. If these files' content is not the same in both boot environments, they are copied from the active boot environment to the new boot environment. Synchronizing is meant for critical files such as /etc/passwd or /etc/group files that might have changed since the new boot environment was created.
The /etc/lu/synclist file contains a list of directories and files that are synchronized. In some instances, you might want to copy other files from the active boot environment to the new boot environment. You can add directories and files to /etc/lu/synclist if necessary.
Adding files not listed in the /etc/lu/synclist could cause a system to become unbootable. The synchronization process only copies files and creates directories. The process does not remove files and directories.
The following example of the /etc/lu/synclist file shows the standard directories and files that are synchronized for this system.
/var/mail                OVERWRITE
/var/spool/mqueue        OVERWRITE
/var/spool/cron/crontabs OVERWRITE
/var/dhcp                OVERWRITE
/etc/passwd              OVERWRITE
/etc/shadow              OVERWRITE
/etc/opasswd             OVERWRITE
/etc/oshadow             OVERWRITE
/etc/group               OVERWRITE
/etc/pwhist              OVERWRITE
/etc/default/passwd      OVERWRITE
/etc/dfs                 OVERWRITE
/var/log/syslog          APPEND
/var/adm/messages        APPEND
Examples of directories and files that might be appropriate to add to the synclist file are the following:
/var/yp          OVERWRITE
/etc/mail        OVERWRITE
/etc/resolv.conf OVERWRITE
/etc/domainname  OVERWRITE
The synclist file entries can be files or directories. The second field is the method of updating that occurs on the activation of the boot environment. You can choose from three methods to update files:
OVERWRITE – The contents of the active boot environment's file overwrites the contents of the new boot environment file. OVERWRITE is the default action if no action is specified in the second field. If the entry is a directory, all subdirectories are copied. All files are overwritten. The new boot environment file has the same date, mode, and ownership as the same file on the previous boot environment.
APPEND – The contents of the active boot environment's file are added to the end of the new boot environment's file. This addition might lead to duplicate entries in the file. Directories cannot be listed as APPEND. The new boot environment file has the same date, mode, and ownership as the same file on the previous boot environment.
PREPEND – The contents of the active boot environment's file are added to the beginning of the new boot environment's file. This addition might lead to duplicate entries in the file. Directories cannot be listed as PREPEND. The new boot environment file has the same date, mode, and ownership as the same file on the previous boot environment.
The first time you boot from a newly created boot environment, Solaris Live Upgrade synchronizes the new boot environment with the boot environment that was last active. After this initial boot and synchronization, Solaris Live Upgrade does not perform a synchronization unless requested.
To force synchronization by using the CUI, you type yes when prompted.
To force synchronization by using the CLI, you use the luactivate command with the -s option.
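A minimal sketch of a forced synchronization from the CLI, assuming the boot environment to activate is named second_disk (a hypothetical name):

# luactivate -s second_disk
# init 6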
You might want to force a synchronization if you are maintaining multiple versions of the Solaris OS. You might want changes in files such as email or passwd/group files to be carried over to the boot environment that you are activating. If you force a synchronization, Solaris Live Upgrade checks for conflicts between files that are subject to synchronization. When the new boot environment is booted and a conflict is detected, a warning is issued and the files are not synchronized. Activation can be completed successfully despite such a conflict. A conflict can occur if you make changes to the same file on both the new boot environment and the active boot environment. For example, you make changes to the /etc/passwd file on the original boot environment. Then you make other changes to the /etc/passwd file on the new boot environment. The synchronization process cannot choose which file to copy for the synchronization.
Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 11/06 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 11/06 files might not be compatible with the Solaris 9 files.
Starting with the Solaris 10 1/06 release, a GRUB boot menu provides an optional method of switching between boot environments. The GRUB menu is an alternative to activating with the luactivate command or the Activate menu.
| Task | Information |
|---|---|
| To activate a boot environment with the GRUB menu | x86: To Activate a Boot Environment With the GRUB Menu (Command-Line Interface) |
| To fall back to the original boot environment with a GRUB menu | x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu |
| For overview and planning information for GRUB | |
| For a complete GRUB overview and system administration tasks | |
When viewing the character user interface remotely, such as over a tip line, you might need to set the TERM environment variable to VT220. Also, when using the Common Desktop Environment (CDE), set the value of the TERM variable to dtterm, rather than xterm.
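For example, in Bourne shell syntax (the exact terminal type value depends on your terminfo database; vt220 is typical):

# TERM=vt220; export TERM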
This chapter explains how to install Solaris Live Upgrade, use the menus, and create a boot environment. This chapter contains the following sections:
Task Map: Installing Solaris Live Upgrade and Creating Boot Environments
Starting and Stopping Solaris Live Upgrade (Character User Interface)
You can run Solaris Live Upgrade with a character user interface (CUI) or the command-line interface (CLI). Procedures for both the CUI and CLI are provided in the following sections.
| Interface Type | Description |
|---|---|
| Character user interface (CUI) | The CUI does not provide access to all features of Solaris Live Upgrade. The CUI does not run in multibyte locales or 8-bit locales. |
| Command-line interface (CLI) | The CLI procedures in this document cover the basic uses of the Solaris Live Upgrade commands. See Chapter 10, Solaris Live Upgrade (Command Reference), for a list of commands, and see the associated man pages for more options to use with these commands. |
Navigation through the menus of the Solaris Live Upgrade character user interface requires that you use arrow keys and function keys. Use arrow keys to navigate up and down before making a selection or to place the cursor in a field. To perform a task, use the function keys. At the bottom of the menu, you see black rectangles that represent function keys on the keyboard. For example, the first black rectangle represents F1 and the second black rectangle represents F2. Rectangles that are active contain a word that represents a task, such as Save. The Configuration menu notes the function key number plus the task, rather than a rectangle.
F3 is always SAVE and completes the task for that menu.
F6 is always CANCEL and exits the menu without saving changes.
Other function keys' tasks vary, depending on the menu.
In the following procedures, you might be asked to press a function key. If your function keys do not properly map to the function keys on the Solaris Live Upgrade menus, use Control-F plus the appropriate number.
| Task | Description | For Instructions |
|---|---|---|
| Install patches on your system | Solaris Live Upgrade requires a limited set of patch revisions. | |
| Install Solaris Live Upgrade packages | Install packages on your OS. | |
| Start Solaris Live Upgrade | Start the Solaris Live Upgrade main menu. | Starting and Stopping Solaris Live Upgrade (Character User Interface) |
| Create a boot environment | Copy and reconfigure file systems to an inactive boot environment. | |
You need to install the Solaris Live Upgrade packages on your current OS. The release of the Solaris Live Upgrade packages must match the release of the OS you are upgrading to. For example, if your current OS is the Solaris 9 release and you want to upgrade to the Solaris 10 11/06 release, you need to install the Solaris Live Upgrade packages from the Solaris 10 11/06 release.
Some patches might be required. Install these patches before you install Solaris Live Upgrade packages. For more information, see the following:
| Description | For More Information |
|---|---|
| Caution – Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches. x86 only – If this set of patches is not installed, Solaris Live Upgrade fails and you might see the following error message. Even if you do not see the error message, the necessary patches still might not be installed. Always verify that all patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade. The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, check that you have the latest Solaris Live Upgrade patches installed. | Ensure you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for info doc 72099 on the SunSolve web site. |
| If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. | To install the Solaris Live Upgrade packages, use the pkgadd command. Or, for the Java 2 runtime environment, install the recommended patch cluster. The patch cluster is available at http://sunsolve.sun.com. |
From the SunSolveSM web site, obtain the list of patches.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Install the patches with the patchadd command.
# patchadd path_to_patches
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required or Solaris Live Upgrade fails.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Install the packages in the following order.
# pkgadd -d path_to_packages SUNWlur SUNWluu
where path_to_packages specifies the absolute path to the software packages.
Verify that the package has been installed successfully.
# pkgchk -v SUNWlur SUNWluu
This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.
Insert the Solaris Operating System DVD or Solaris Software - 2 CD.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Run the installer for the media you are using.
If you are using the Solaris Operating System DVD, change directories to the installer and run the installer.
For SPARC based systems:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
# ./liveupgrade20
For x86 based systems:
# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
# ./liveupgrade20
The Solaris installation program GUI is displayed.
If you are using the Solaris Software - 2 CD, run the installer.
% ./installer
The Solaris installation program GUI is displayed.
From the Select Type of Install panel, click Custom.
On the Locale Selection panel, click the language to be installed.
Choose the software to install.
For DVD, on the Component Selection panel, click Next to install the packages.
For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click on the other software choices to deselect them.
Follow the directions on the Solaris installation program panels to install the software.
This procedure starts and stops the Solaris Live Upgrade menu program.
When viewing the character user interface remotely, such as over a tip line, you might need to set the TERM environment variable to VT220. Also, when using the Common Desktop Environment (CDE), set the value of the TERM variable to dtterm, rather than xterm.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/lu
The Solaris Live Upgrade main menu is displayed.
From the main menu, select Exit.
Creating a boot environment provides a method of copying critical file systems from the active boot environment to a new boot environment. The CUI's Create menu and Configuration submenu, and the lucreate command enable reorganizing a disk if necessary, customizing file systems, and copying the critical file systems to the new boot environment.
Before file systems are copied to the new boot environment, they can be customized so that critical file system directories are either merged into their parent directory or split from their parent directory. User-defined (shareable) file systems are shared between boot environments by default. But shareable file systems can be copied if needed. Swap, which is a shareable file system, can be split and merged also. For an overview of critical and shareable file systems, see File System Types.
From the main menu, select Create.
The system displays the Create a Boot Environment submenu.
Type the name of the active boot environment (if necessary) and the new boot environment and confirm. You are only required to type the name of the active boot environment the first time you create a boot environment.
The boot environment name can be no longer than 30 characters, can contain only alphanumeric characters, and can contain no multibyte characters.
Name of Current Boot Environment: solaris8
Name of New Boot Environment: solaris10
To save your changes, press F3.
The configuration menu appears.
The configuration menu contains the following parts:
The original boot environment is located at the top of the screen. The boot environment to be created is at the bottom.
The Device field contains the following information.
The name of a disk device of the form /dev/dsk/cwtxdysz.
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum.
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name.
The area for selecting a critical file system is blank until you select a critical file system. The critical file systems such as /usr, /var, or /opt can be split or merged with the root (/) file system.
Shareable file systems such as /export or swap are displayed in the Device field. These file systems contain the same mount point in both the source and target boot environments. Swap is shared by default, but you can also split and merge (add and remove) swap slices.
For an overview of critical and shareable file systems, see File System Types.
The FS_Type field enables you to change file system type. The file system type can be one of the following:
vxfs, which indicates a Veritas file system
swap, which indicates a swap file system
ufs, which indicates a UFS file system
(Optional) The following tasks can be done at any time:
To print the information onscreen to an ASCII file, press F5.
To scroll through the file system list, press Control-X.
You can then switch between the file systems of the active and new boot environment and scroll.
To exit the Configuration menu at any time, press F6.
If you are in the Configuration menu, changes are not saved and file systems are not altered.
If you are in a Configuration submenu, you return to the Configuration menu.
Select an available slice by pressing F2.
The Choices menu displays available slices on the system for the field where the cursor is placed. The menu displays a device field and a file system FS_Type field.
Use the arrow keys to place the cursor in a field to select a slice or file system type.
When you place your cursor in the Device field, all free slices are displayed. For the root (/) file system, the Choices menu only displays free slices that meet the root (/) file system limitations. See Guidelines for Selecting a Slice for the root (/) File System.
When you place your cursor in the FS_Type field, all available file system types are displayed.
Slices in bold can be selected for the current file system. The size of the slice is estimated by taking the size of the file system and adding 30 percent to accommodate an upgrade.
Slices not in bold are too small to support the given file system. To reslice a disk, see Step 6.
Press Return to choose a slice.
The slice appears in the Device field or the file system type changes in the FS_Type field.
(Optional) If available slices do not meet the minimum requirements, to reslice any available disks, press F4.
The Solaris Live Upgrade Slice Configuration menu appears.
The format(1M) command runs, which enables you to create new slices. Follow the on-screen instructions to create a new slice.
To navigate through this menu, use the arrow keys to move between the Device field and FS_Type field. The Size (Mbytes) field is automatically completed as the devices are selected.
(Optional) Splitting critical file systems puts the file systems on separate mount points. To split a file system, do the following:
(To merge file systems, see Step 8).
Select the file system to split.
You can split or exclude file systems such as /usr, /var, or /opt from their parent directory.
When creating file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from making invalid configurations on critical file systems. For example, you could enter a lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.
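For example, a command like the following (device names are hypothetical) attempts exactly that invalid split and should be avoided:

# lucreate -m /:/dev/dsk/c0t1d0s0:ufs \
-m /kernel:/dev/dsk/c0t1d0s3:ufs -n invalid_be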
Press F8.
Type the file system name for the new boot environment, for example:
Enter the directory that will be a separate file system on the new boot environment: /opt
When the new file system is verified, a new line is added to the screen.
To return to the Configuration menu, press F3.
The Configuration menu is displayed.
(Optional) Merging puts the file systems on the same mount point. To merge a file system into its parent directory:
(To split file systems, see Step 7.)
Select the file system to merge.
You can merge file systems such as /usr, /var, or /opt into their parent directory.
Press F9.
The file systems that will be combined are displayed, for example:
/opt will be merged into /.
Press Return.
To return to the Configuration menu, press F3.
The Configuration menu is displayed.
(Optional) Decide if you want to add or remove swap slices.
(Optional) To split a swap slice, do the following:
In the Device field, select the swap slice that you want to split.
Press F8.
At the prompt, type:
Enter the directory that will be a separate filesystem on the new BE: swap
Press F2 Choice.
The Choice menu lists the available slices for swap.
Select the slice to put swap on.
The slice appears in the Device field and you have a new slice for swap.
(Optional) To remove a swap slice, do the following:
Decide if you want to create the boot environment now or schedule the creation for later:
Press F3 to create the new boot environment now.
The configuration is saved and you exit the configuration screen. The file systems are copied, the boot environment is made bootable, and an inactive boot environment is created.
Creating a boot environment might take an hour or more, depending on your system configuration. The Solaris Live Upgrade main menu is then displayed.
If you want to schedule the creation for a later time, type y, then the start time, and an email address, as in this example.
Do you want to schedule the copy? y
Enter the time in 'at' format to schedule create: 8:15 PM
Enter the address to which the copy log should be mailed: someone@anywhere.com
You are notified of the completion by email.
For information about time formats, see the at(1) man page.
You can schedule only one job at a time.
After the creation is complete, the inactive boot environment is ready to be upgraded. See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
The lucreate command used with the -m option specifies which file systems to create in the new boot environment and how many. You must specify the exact number of file systems you want to create by repeating this option. For example, a single use of the -m option specifies where to put all the file systems: all the file systems from the original boot environment are merged into the one file system that is specified by the -m option. If you specify the -m option twice, you create two file systems. When using the -m option to create file systems, follow these guidelines (a short example follows the list):
You must specify one -m option for the root (/) file system for the new boot environment. If you run lucreate without the -m option, the Configuration menu is displayed. The Configuration menu enables you to customize the new boot environment by redirecting files onto new mount points.
Any critical file systems that exist in the current boot environment and are not specified in a -m option are merged into the next highest-level file system created.
Only the file systems that are specified by the -m option are created on the new boot environment. If your current boot environment contains multiple file systems and you want the new boot environment to contain the same number, you must specify one -m option for each file system to be created. For example, if you have file systems for root (/), /opt, and /var, you would use one -m option for each file system on the new boot environment.
Do not duplicate a mount point. For example, you cannot have two root (/) file systems.
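For example, the following sketch (slice names are hypothetical) contrasts the two cases. The first command merges all critical file systems from the original boot environment into a single root (/) file system; the second creates root (/) and /usr as separate file systems:

# lucreate -m /:/dev/dsk/c0t1d0s0:ufs -n merged_be
# lucreate -m /:/dev/dsk/c0t1d0s0:ufs -m /usr:/dev/dsk/c0t1d0s3:ufs -n split_be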
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To create the new boot environment, type:
# lucreate [-A 'BE_description'] -c BE_name \
-m mountpoint:device[,metadevice]:fs_options [-m ...] -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-c BE_name
Assigns the name BE_name to the active boot environment. This option is not required and is only used when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.
The default name is chosen according to the following criteria:
If the physical boot device can be determined, then the base name of the physical boot device is used to name the current boot environment.
For example, if the physical boot device is /dev/dsk/c0t0d0s0, then the current boot environment is given the name c0t0d0s0.
If the physical boot device cannot be determined, then names from the uname command with the -s and -r options are combined to produce the name.
For example, if uname -s returns the OS name SunOS and uname -r returns the release name 5.9, then the name SunOS5.9 is given to the current boot environment.
If the name cannot be determined by either method, then the name current is used to name the current boot environment.
If you use the -c option after the first boot environment creation, the option is ignored or an error message is displayed.
If the name specified is the same as the current boot environment name, the option is ignored.
If the name specified is different than the current boot environment name, then an error message is displayed and the creation fails. The following example shows a boot environment name that causes an error message.
# lucurr
c0t0d0s0
# lucreate -c /dev/dsk/c1t1d1s1 -n newbe -m /:/dev/dsk/c1t1d1s1:ufs
ERROR: current boot environment name is c0t0d0s0: cannot change name using <-c c1t1d1s1>
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the active boot environment is named first_disk. The mount points for the file systems are noted by using the -m option. Two file systems are created, root (/) and /usr. The new boot environment is named second_disk. A description, mydescription, is associated with the name second_disk. Swap, in the new boot environment second_disk, is automatically shared from the source, first_disk.
# lucreate -A 'mydescription' -c first_disk -m /:/dev/dsk/c0t4d0s0:ufs \
-m /usr:/dev/dsk/c0t4d0s3:ufs -n second_disk
You can use the lucreate command with the -m option to specify which file systems to create in the new boot environment and how many. You must specify the exact number of file systems you want to create by repeating this option. For example, a single use of the -m option specifies where to put all the file systems: you merge all the file systems from the original boot environment into one file system. If you specify the -m option twice, you create two file systems.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lucreate -A 'BE_description' \
-m mountpoint:device[,metadevice]:fs_options \
-m [...] -m mountpoint:merged:fs_options -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the file systems on the current boot environment are root (/), /usr, and /opt. The /opt file system is combined with its parent file system /usr. The new boot environment is named second_disk. A description, mydescription, is associated with the name second_disk.
# lucreate -A 'mydescription' -c first_disk \
-m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s1:ufs \
-m /usr/opt:merged:ufs -n second_disk
When creating file systems for a boot environment, the rules are identical to the rules for creating file systems for the Solaris OS. Solaris Live Upgrade cannot prevent you from making invalid configurations on critical file systems. For example, you could enter an lucreate command that would create separate file systems for root (/) and /kernel, which is an invalid division of the root (/) file system.
When splitting a directory into multiple mount points, hard links are not maintained across file systems. For example, if /usr/stuff1/file is hard linked to /usr/stuff2/file, and /usr/stuff1 and /usr/stuff2 are split into separate file systems, the link between the files no longer exists. lucreate issues a warning message and a symbolic link is created to replace the lost hard link.
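A minimal sketch of that scenario, using hypothetical slices. On the source boot environment the two names reference one inode; on the new boot environment, after the split, /usr/stuff2/file becomes a symbolic link to /usr/stuff1/file:

# ln /usr/stuff1/file /usr/stuff2/file
# lucreate -m /:/dev/dsk/c0t1d0s0:ufs \
-m /usr/stuff1:/dev/dsk/c0t1d0s4:ufs \
-m /usr/stuff2:/dev/dsk/c0t1d0s5:ufs -n split_be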
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lucreate [-A 'BE_description'] \
-m mountpoint:device[,metadevice]:fs_options \
-m mountpoint:device[,metadevice]:fs_options -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
In this example, the command shown below splits the root (/) file system over multiple disk slices in the new boot environment. Assume a source boot environment that has /usr, /var, and /opt all on root (/), which is on /dev/dsk/c0t0d0s0.
On the new boot environment, separate /usr, /var, and /opt, mounting these file systems on their own slices, as follows:
/dev/dsk/c0t1d0s0 /
/dev/dsk/c0t1d0s1 /var
/dev/dsk/c0t1d0s7 /usr
/dev/dsk/c0t1d0s5 /opt
A description, mydescription, is associated with the boot environment name second_disk.
# lucreate -A 'mydescription' -c first_disk \
-m /:/dev/dsk/c0t1d0s0:ufs -m /usr:/dev/dsk/c0t1d0s7:ufs \
-m /var:/dev/dsk/c0t1d0s1:ufs -m /opt:/dev/dsk/c0t1d0s5:ufs \
-n second_disk
When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
Swap slices are shared between boot environments by default. If you do not specify swap with the -m option, your current and new boot environments share the same swap slices. If you want to reconfigure the new boot environment's swap, use the -m option to add or remove swap slices in the new boot environment.
The swap slice cannot be in use by any boot environment except the current boot environment or, if the -s option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains a swap, UFS, or any other file system.
You can create a boot environment with the existing swap slices and then edit the vfstab file after the creation.
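For example, after the boot environment is created, you might append a line such as the following (device name hypothetical) to the new boot environment's /etc/vfstab to add a swap slice:

/dev/dsk/c0t4d0s1   -   -   swap   -   no   -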
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lucreate [-A 'BE_description'] \
-m mountpoint:device[,metadevice]:fs_options \
-m -:device:swap -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-n BE_name
The name of the boot environment to be created. BE_name must be unique.
The new boot environment is created with swap moved to a different slice or device.
When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the current boot environment contains root (/) on /dev/dsk/c0t0d0s0 and swap is on /dev/dsk/c0t0d0s1. The new boot environment copies root (/) to /dev/dsk/c0t4d0s0 and uses both /dev/dsk/c0t0d0s1 and /dev/dsk/c0t4d0s1 as swap slices. A description, mydescription, is associated with the boot environment name second_disk.
# lucreate -A 'mydescription' -c first_disk \ -m /:/dev/dsk/c0t4d0s0:ufs -m -:/dev/dsk/c0t0d0s1:swap \ -m -:/dev/dsk/c0t4d0s1:swap -n second_disk |
These swap assignments are effective only after booting from second_disk. If you have a long list of swap slices, use the -M option. See To Create a Boot Environment and Reconfigure Swap by Using a List (Command-Line Interface).
If you have a long list of swap slices, create a swap list. lucreate uses this list for the swap slices in the new boot environment.
The swap slice cannot be in use by any boot environment except the current boot environment or, if the -s option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the swap slice contains a swap, UFS, or any other file system.
Create a list of swap slices to be used in the new boot environment. The location and name of this file are user defined. In this example, the content of the /etc/lu/swapslices file is a list of devices and slices:
-:/dev/dsk/c0t3d0s2:swap
-:/dev/dsk/c0t4d0s2:swap
-:/dev/dsk/c0t5d0s2:swap
-:/dev/dsk/c1t3d0s2:swap
-:/dev/dsk/c1t4d0s2:swap
-:/dev/dsk/c1t5d0s2:swap
Type:
# lucreate [-A 'BE_description'] \
-m mountpoint:device[,metadevice]:fs_options \
-M slice_list -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-M slice_list
List of -m options, which are collected in the file slice_list. Specify these arguments in the format that is specified for -m. Comment lines, which begin with a hash mark (#), are ignored. The -M option is useful when you have a long list of file systems for a boot environment. Note that you can combine -m and -M options. For example, you can store swap slices in slice_list and specify root (/) and /usr slices with -m.
The -m and -M options support the listing of multiple slices for a particular mount point. In processing these slices, lucreate skips any unavailable slices and selects the first available slice. (A short sketch follows this option list.)
-n BE_name
The name of the boot environment to be created. BE_name must be unique.
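As a sketch of the multiple-slice behavior described above (devices hypothetical), a slice_list file can offer two candidate slices for the same mount point; lucreate uses the first slice that is available:

# two candidates for root (/); the first available slice is used
/:/dev/dsk/c0t3d0s0:ufs
/:/dev/dsk/c0t4d0s0:ufs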
When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, swap in the new boot environment is the list of slices that are noted in the /etc/lu/swapslices file. A description, mydescription, is associated with the name second_disk.
# lucreate -A 'mydescription' -c first_disk \
-m /:/dev/dsk/c02t4d0s0:ufs -m /usr:/dev/dsk/c02t4d0s1:ufs \
-M /etc/lu/swapslices -n second_disk
If you want a shareable file system to be copied to the new boot environment, specify the mount point to be copied with the -m option. Otherwise, shareable file systems are shared by default, and maintain the same mount point in the vfstab file. Any updating that is applied to the shareable file system is available to both boot environments.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Create the boot environment.
# lucreate [-A 'BE_description'] \
-m mountpoint:device[,metadevice]:fs_options \
-m mountpoint:device[,metadevice]:fs_options -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-n BE_name
The name of the boot environment to be created. BE_name must be unique.
When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the current boot environment contains two file systems, root (/) and /home. In the new boot environment, the root (/) file system is split into two file systems, root (/) and /usr. The /home file system is copied to the new boot environment. A description, mydescription, is associated with the boot environment name second_disk.
# lucreate -A 'mydescription' -c first_disk \
-m /:/dev/dsk/c0t4d0s0:ufs -m /usr:/dev/dsk/c0t4d0s3:ufs \
-m /home:/dev/dsk/c0t4d0s4:ufs -n second_disk
The lucreate command creates a boot environment that is based on the file systems in the active boot environment. If you want to create a boot environment based on a boot environment other than the active boot environment, use lucreate with the -s option.
If you activate the new boot environment and need to fall back, you boot back to the boot environment that was last active, not the source boot environment.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Create the boot environment.
# lucreate [-A 'BE_description'] -s source_BE_name \
-m mountpoint:device[,metadevice]:fs_options -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-s source_BE_name
Specifies the source boot environment for the new boot environment. The source is not the active boot environment.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
When creation of the new boot environment is complete, it can be upgraded and activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, a boot environment is created that is based on the root (/) file system in the source boot environment named third_disk. Third_disk is not the active boot environment. A description, mydescription, is associated with the new boot environment named second_disk.
# lucreate -A 'mydescription' -s third_disk \
-m /:/dev/dsk/c0t4d0s0:ufs -n second_disk
The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When using the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. The slices are reserved for the file systems that are specified, but no file systems are copied. The boot environment is named, but not actually created until installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, file systems are installed on the reserved slices.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Create the empty boot environment.
# lucreate -A 'BE_description' -s - \
-m mountpoint:device[,metadevice]:fs_options -n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name (BE_name). The description can be any length and can contain any characters.
-s -
Specifies that an empty boot environment be created.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager metadevice, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
In this example, a boot environment is created but contains no file systems. A description, mydescription, is associated with the new boot environment that is named second_disk.
# lucreate -A 'mydescription' -s - \
-m /:/dev/dsk/c0t1d0s0:ufs -n second_disk
When creation of the empty boot environment is complete, a flash archive can be installed and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
For an example of creating and populating an empty boot environment, see Example of Creating an Empty Boot Environment and Installing a Solaris Flash Archive (Command-Line Interface).
When you create a boot environment, Solaris Live Upgrade uses Solaris Volume Manager technology to create RAID-1 volumes. You can use Solaris Live Upgrade to manage the following tasks.
Remove a single-slice concatenation (submirror) from a RAID-1 volume (mirror). The contents can be saved to become the content of the new boot environment if necessary. Because the contents are not copied, the new boot environment can be quickly created. After the submirror is detached from a mirror, it is no longer part of the original mirror. Reads and writes to the submirror are no longer performed through the mirror.
Create a boot environment that contains a mirror.
Attach a single-slice concatenation to the newly created mirror.
To use the mirroring capabilities of Solaris Live Upgrade, you must create a state database and a state database replica. A state database stores information on disk about the state of your Solaris Volume Manager configuration.
For information about creating a state database, see Chapter 6, State Database (Overview), in Solaris Volume Manager Administration Guide.
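A minimal sketch of that one-time setup, assuming a spare slice c0t0d0s7 is available to hold the replicas:

# metadb -a -f -c 2 c0t0d0s7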
For an overview of Solaris Volume Manager and the tasks that Solaris Live Upgrade can provide, see Creating a Boot Environment With RAID-1 Volume File Systems.
For in-depth information about complex Solaris Volume Manager configurations that are not allowed when using Solaris Live Upgrade, see Chapter 2, Storage Management Concepts, in Solaris Volume Manager Administration Guide.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To create the new boot environment, type:
# lucreate [-A 'BE_description'] \
-m mountpoint:device[,metadevice]:fs_options [-m...] \
-n BE_name
-A 'BE_description'
(Optional) Enables the creation of a boot environment description that is associated with the boot environment name BE_name. The description can be any length and can contain any characters.
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following types of file systems and keywords:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device.
mirror creates a RAID-1 volume or mirror on the specified device. In subsequent -m options, you must specify attach to attach at least one concatenation to the new mirror. The specified device must be correctly named. For example, a logical device name of /dev/md/dsk/d10 can serve as a mirror name. For more information about naming devices, see Overview of Solaris Volume Manager Components in Solaris Volume Manager Administration Guide.
detach removes a concatenation from a volume that is associated with a specified mount point. The volume does not need to be specified.
attach attaches a concatenation to the mirror that is associated with a specified mount point. The physical disk slice that is specified is made into a single-device concatenation for attaching to the mirror. To specify the name of the concatenation to attach, append a comma and that name to the device name. If you omit the comma and the concatenation name, lucreate selects a free volume for the concatenation.
lucreate allows you to create only concatenations that contain a single physical slice. This command allows you to attach up to three concatenations to a mirror.
preserve saves the existing file system and its content. This keyword enables you to bypass the copying process that copies the content of the source boot environment. Saving the content enables a quick creation of the new boot environment. For a particular mount point, you can use preserve with only one physical device. When you use preserve, lucreate checks that the device's content is suitable for a specified file system. This check is limited and cannot guarantee suitability.
The preserve keyword can be used with both a physical slice and a Solaris Volume Manager volume.
If you use the preserve keyword when the UFS file system is on a physical slice, the content of the UFS file system is saved on the slice. In the following example of the -m option, the preserve keyword saves the content of the physical device c0t0d0s0 as the file system for the mount point for the root (/) file system.
-m /:/dev/dsk/c0t0d0s0:preserve,ufs
If you use the preserve keyword when the UFS file system is on a volume, the contents of the UFS file system are saved on the volume.
In the following example of the -m option, the preserve keyword saves the contents of the RAID-1 volume (mirror) d10 as the file system for the mount point for the root (/) file system.
-m /:/dev/md/dsk/d10:preserve,ufs
In the following example of the -m option, a RAID-1 volume (mirror) d10 is configured as the file system for the mount point for the root (/) file system. The single-slice concatenation d20 is detached from its current mirror. d20 is attached to mirror d10. The root (/) file system is preserved on submirror d20.
-m /:/dev/md/dsk/d10:mirror,ufs -m /:/dev/md/dsk/d20:detach,attach,preserve
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the mount points for the file systems are specified by using the -m option.
A description, mydescription, is associated with the name another_disk.
lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system that is copied to the mirror d10. All data on the mirror d10 is overwritten.
Two slices, c0t0d0s0 and c0t1d0s0, are made into submirrors d1 and d2. These two submirrors are added to mirror d10.
The new boot environment is named another_disk.
# lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:attach \
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach -n another_disk
In this example, the mount points for the file systems are specified by using the -m option.
A description, mydescription, is associated with the name another_disk.
lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system that is copied to the mirror d10. All data on the mirror d10 is overwritten.
Two slices, c0t0d0s0 and c0t1d0s0, are specified to be used as submirrors. Submirror names are not specified; the lucreate command chooses names from a list of available volume names. These two submirrors are attached to mirror d10.
The new boot environment is named another_disk.
# lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0:attach \
-m /:/dev/dsk/c0t1d0s0:attach -n another_disk
When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the mount points for the file systems are specified by using the -m option.
A description, mydescription, is associated with the name another_disk.
lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created.
Slice c0t0d0s0 is removed from its current mirror. The slice is specified to be submirror d1 and is added to mirror d10. The contents of the submirror, the root (/) file system, are saved and no copy occurs. Slice c0t1d0s0 is submirror d2 and is added to mirror d10.
The new boot environment is named another_disk.
# lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:detach,attach,preserve \
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach -n another_disk
This example can be abbreviated as in the following example. The physical and logical device names are shortened. The specifiers for the submirrors d1 and d2 are omitted.
# lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0:detach,attach,preserve \
-m /:/dev/dsk/c0t1d0s0:attach -n another_disk
When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the mount points for the file systems are specified by using the -m option.
A description, mydescription, is associated with the name another_disk.
lucreate configures a UFS file system for the mount point root (/). A mirror, d20, is created.
Slice c0t0d0s0 is removed from its current mirror and added to the mirror d20. The name of the submirror is not specified. The contents of the submirror, the root (/) file system, are saved and no copy occurs.
The new boot environment is named another_disk.
# lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d20:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0:detach,attach,preserve \
-n another_disk
When the creation of the new boot environment is complete, the boot environment can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
In this example, the mount points for the file systems are specified by using the -m option.
A description, mydescription, is associated with the name another_disk.
lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system that is copied to the mirror d10. All data on the mirror d10 is overwritten.
Two slices, c0t0d0s0 and c0t1d0s0, are submirrors d1 and d2. These two submirrors are added to mirror d10.
lucreate configures a UFS file system for the mount point /opt. A mirror, d11, is created. This mirror is the receptacle for the current boot environment's /opt file system that is copied to the mirror d11. All data on the mirror d11 is overwritten.
Two slices, c2t0d0s1 and c3t1d0s1, are submirrors d3 and d4. These two submirrors are added to mirror d11.
The new boot environment is named another_disk.
# lucreate -A 'mydescription' \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t0d0s0,/dev/md/dsk/d1:attach \
-m /:/dev/dsk/c0t1d0s0,/dev/md/dsk/d2:attach \
-m /opt:/dev/md/dsk/d11:ufs,mirror \
-m /opt:/dev/dsk/c2t0d0s1,/dev/md/dsk/d3:attach \
-m /opt:/dev/dsk/c3t1d0s1,/dev/md/dsk/d4:attach -n another_disk
When the creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
The content of the file system on the new boot environment can be modified by using the following options. Directories and files are not copied to the new boot environment.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To create the new boot environment, type:
# lucreate -m mountpoint:device[,metadevice]:fs_options [-m ...] \
[-x exclude_dir] [-y include] \
[-Y include_list_file] \
[-f exclude_list_file] \
[-z filter_list] [-I] -n BE_name
-m mountpoint:device[,metadevice]:fs_options
Specifies the file systems' configuration of the new boot environment in the vfstab. The file systems that are specified as arguments to -m can be on the same disk or they can be spread across multiple disks. Use this option as many times as needed to create the number of file systems that are needed.
mountpoint can be any valid mount point or - (hyphen), indicating a swap partition.
device field can be one of the following:
The name of a disk device, of the form /dev/dsk/cwtxdysz
The name of a Solaris Volume Manager volume, of the form /dev/md/dsk/dnum
The name of a Veritas Volume Manager volume, of the form /dev/vx/dsk/volume_name
The keyword merged, indicating that the file system at the specified mount point is to be merged with its parent
fs_options field can be one of the following:
ufs, which indicates a UFS file system.
vxfs, which indicates a Veritas file system.
swap, which indicates a swap file system. The swap mount point must be a - (hyphen).
For file systems that are logical devices (mirrors), several keywords specify actions to be applied to the file systems. These keywords can create a logical device, change the configuration of a logical device, or delete a logical device. For a description of these keywords, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
-x exclude_dir
Excludes files and directories by not copying them to the new boot environment. You can use multiple instances of this option to exclude more than one file or directory.
exclude_dir is the name of the directory or file.
-y include
Copies directories and files that are listed to the new boot environment. This option is used when you have excluded a directory, but want to restore individual subdirectories or files.
include_dir is the name of the subdirectory or file to be included.
-Y include_list_file
Copies directories and files from a list to the new boot environment. This option is used when you have excluded a directory, but want to restore individual subdirectories or files.
list_filename is the full path to a file that contains a list.
The list_filename file must contain one file per line.
If a line item is a directory, all subdirectories and files beneath that directory are included. If a line item is a file, only that file is included.
-f exclude_list_file
Uses a list to exclude directories and files by not copying them to the new boot environment.
list_filename is the full path to a file that contains a list.
The list_filename file must contain one file per line.
-z filter_list
Uses a list to copy directories and files to the new boot environment. Each file or directory in the list is noted with a plus (+) or minus (-). A plus indicates an included file or directory; a minus indicates an excluded file or directory. (Sample lists appear after this option list.)
list_filename is the full path to a file that contains a list.
The list_filename file must contain one file per line. A space must follow the plus or minus before the file name.
If a line item is a directory and is indicated with a + (plus), all subdirectories and files beneath that directory are included. If a line item is a file and is indicated with a + (plus), only that file is included.
-I
Overrides the integrity check of system files. Use this option with caution.
To prevent you from removing important system files from a boot environment, lucreate runs an integrity check. This check examines all files that are registered in the system package database and stops the boot environment creation if any files are excluded. Use of this option overrides this integrity check. This option creates the boot environment more quickly, but might not detect problems.
-n BE_name
The name of the boot environment to be created. BE_name must be unique on the system.
When creation of the new boot environment is complete, it can be upgraded and can be activated (made bootable). See Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).
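As a sketch of the list-driven options above, assume two hypothetical files. An include list for -Y (one path per line; the directory line pulls in everything beneath it):

/mystuff/latest
/mystuff/backup/notes.txt

A filter list for -z (a space follows each plus or minus):

+ /mystuff/latest
- /mystuff/latest/scratch

A hypothetical invocation that applies the filter list:

# lucreate -m /:/dev/dsk/c0t1d0s0:ufs \
-z /var/tmp/filter_list -n second_disk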
In this example, the new boot environment is named second_disk. The source boot environment contains one file system, root (/). In the new boot environment, the /var/mail file system is split from the root (/) file system and put on another slice. The lucreate command configures a UFS file system for the mount points root (/) and /var/mail. Also, two mail files, /var/mail/root and /var/mail/staff, are not copied to the new boot environment. Swap is automatically shared between the source and the new boot environment.
# lucreate -n second_disk \
-m /:/dev/dsk/c0t1d0s0:ufs -m /var/mail:/dev/dsk/c0t2d0s0:ufs \
-x /var/mail/root -x /var/mail/staff
In this example, the new boot environment is named second_disk. The source boot environment contains one file system for the OS, root (/). The source also contains a file system that is named /mystuff. lucreate configures a UFS file system for the mount points root (/) and /mystuff. Only two directories in /mystuff are copied to the new boot environment: /mystuff/latest and /mystuff/backup. Swap is automatically shared between the source and the new boot environment.
# lucreate -n second_disk \
-m /:/dev/dsk/c01t0d0s0:ufs -m /mystuff:/dev/dsk/c1t1d0s0:ufs \
-x /mystuff -y /mystuff/latest -y /mystuff/backup
This chapter explains how to use Solaris Live Upgrade to upgrade and activate an inactive boot environment. This chapter contains the following sections:
You can use Solaris Live Upgrade through menus or through the command-line interface (CLI). Procedures are documented for both interfaces. These procedures do not exhaust the possibilities for using Solaris Live Upgrade. For more information about commands, see Chapter 10, Solaris Live Upgrade (Command Reference) and the appropriate man pages, which more fully document CLI options.
| Task | Description | For Instructions |
|---|---|---|
| Either upgrade a boot environment or install a Solaris Flash archive. | | |
| Activate an inactive boot environment. | Makes changes effective and switches the inactive boot environment to active. | |
| (Optional) Switch back if a failure occurs when activating. | Reactivates the original boot environment if a failure occurs. | Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks) |
Use the Upgrade menu or luupgrade command to upgrade a boot environment. This section provides the procedure for upgrading an inactive boot environment from files that are located on the following media:
NFS server
Local file
Local tape
Local device, including DVD or CD
When you upgrade a boot environment with the latest OS, you do not affect the active boot environment. The new files merge with the inactive boot environment's critical file systems, but shareable file systems are not changed.
Rather than upgrading, if you have created a Solaris Flash archive, you could install the archive on an inactive boot environment. The new files overwrite critical file systems of the inactive boot environment, but shareable file systems are not changed. See Installing Solaris Flash Archives on a Boot Environment.
You can upgrade an inactive boot environment that contains any combination of physical disk slices, Solaris Volume Manager volumes, or Veritas Volume Manager volumes. The slice that is chosen for the root (/) file system must be a single-slice concatenation that is included in a RAID-1 volume (mirror). For procedures about creating a boot environment with mirrored file systems, see To Create a Boot Environment With RAID-1 Volumes (Mirrors) (Command-Line Interface).
If VxVM volumes are configured on your current system, the lucreate command can create a new boot environment. When the data is copied to the new boot environment, the Veritas file system configuration is lost and a UFS file system is created on the new boot environment.
You can use Solaris Live Upgrade to add patches and packages to a system. Solaris Live Upgrade creates a copy of the currently running system. This new boot environment can be upgraded or you can add packages or patches. When you use Solaris Live Upgrade, the only downtime the system incurs is that of a reboot. You can add patches and packages to a new boot environment with the luupgrade command.
When adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or alter the active boot environment during an upgrade.
For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).
| Type of Installation | Description | For More Information |
|---|---|---|
| Adding patches to a boot environment | Create a new boot environment and use the luupgrade command with the -t option. | To Add Patches to an Operating System Image on a Boot Environment (Command-Line Interface) |
| Adding packages to a boot environment | Use the luupgrade command with the -p option. | To Add Packages to an Operating System Image on a Boot Environment (Command-Line Interface) |
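As a sketch of the two operations, assuming an existing boot environment named second_disk and hypothetical paths, patch ID, and package name:

# luupgrade -t -n second_disk -s /var/tmp/patches 222222-01
# luupgrade -p -n second_disk -s /var/tmp/packages SUNWnewpkg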
To upgrade by using this procedure, you must use a DVD or a combined installation image. For an installation with CDs, you must use the procedure To Upgrade an Operating System Image From Multiple CDs (Command-Line Interface).
This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.
From the Solaris Live Upgrade main menu, select Upgrade.
The Upgrade menu screen is displayed.
Type the new boot environment's name.
Type the path to where the Solaris installation image is located.
| Installation Media Type | Description |
|---|---|
| Network File System | Specify the path to the network file system where the installation image is located. |
| Local file | Specify the path to the local file system where the installation image is located. |
| Local tape | Specify the local tape device and the position on the tape where the installation image is located. |
| Local device, DVD, or CD | Specify the local device and the path to the installation image. |
SPARC: If you are using a DVD or a CD, type the path to that disc, as in this example:
/cdrom/cdrom0/s0/Solaris_10/s0
If you have a combined image on the network, type the path to the network file system as in this example:
/net/installmachine/export/Solaris_10/os_image
To upgrade, press F3.
When the upgrade is completed, the main menu is displayed.
To upgrade by using this procedure, you must use a DVD or a combined installation image. If the installation requires more than one CD, you must use the procedure To Upgrade an Operating System Image From Multiple CDs (Command-Line Interface).
Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Indicate the boot environment to upgrade and the path to the installation software by typing:
# luupgrade -u -n BE_name -s os_image_path

-u: Upgrades an operating system image on a boot environment.
-n BE_name: Specifies the name of the boot environment that is to be upgraded.
-s os_image_path: Specifies the path name of a directory that contains an operating system image.
In this example, the second_disk boot environment is upgraded by using DVD media. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.
# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -u -n second_disk -s /cdrom/cdrom0/s0
In this example, the second_disk boot environment is upgraded. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.
# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -u -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image
Because the operating system image resides on more than one CD, you must use this upgrade procedure. Use the luupgrade command with the -i option to install any additional CDs.
Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Indicate the boot environment to upgrade and the path to the installation software by typing:
# luupgrade -u -n BE_name -s os_image_path

-u: Upgrades an operating system image on a boot environment.
-n BE_name: Specifies the name of the boot environment that is to be upgraded.
-s os_image_path: Specifies the path name of a directory that contains an operating system image.
When the installer is finished with the contents of the first CD, insert the second CD.
This step is identical to the previous step, but the -u option is replaced by the -i option. Also, choose to run the installer on the second CD with menus or with text.
This command runs the installer on the second CD with menus.
# luupgrade -i -n BE_name -s os_image_path

This command runs the installer on the second CD with text and requires no user interaction.

# luupgrade -i -n BE_name -s os_image_path -O '-nodisplay -noconsole'

-i: Installs additional CDs. The software looks for an installation program on the specified medium and runs that program. The installer program is specified with -s.
-n BE_name: Specifies the name of the boot environment that is to be upgraded.
-s os_image_path: Specifies the path name of a directory that contains an operating system image.
-O '-nodisplay -noconsole': (Optional) Runs the installer on the second CD in text mode and requires no user interaction.
Repeat Step 4 and Step 5 for each CD that you want to install.
The boot environment is ready to be activated. See Activating a Boot Environment.
In this example, the second_disk boot environment is upgraded and the installation image is on two CDs: the Solaris Software - 1 and the Solaris Software - 2 CDs. The -u option determines if sufficient space for all the packages is on the CD set. The -O option with the -nodisplay and -noconsole options prevents the character user interface from displaying after the reading of the second CD. If you use these options, you are not prompted to type information. Omit these options to display the interface.
Install the Solaris Live Upgrade packages from the release you are upgrading to.
# pkgadd -d /server/packages SUNWlur SUNWluu

Insert the Solaris Software - 1 CD and type:

For SPARC based systems:

# luupgrade -u -n second_disk -s /cdrom/cdrom0/s0

For x86 based systems:

# luupgrade -u -n second_disk -s /cdrom/cdrom0/
Insert the Solaris Software - 2 CD and type the following.
# luupgrade -i -n second_disk -s /cdrom/cdrom0 -O '-nodisplay -noconsole'
Repeat the previous step for each CD that you want to install.
In the following procedure, packages are removed from and added to a new boot environment.
When you are upgrading and adding or removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or can alter the active boot environment.
For more information about packaging requirements, see Appendix B, Additional SVR4 Packaging Requirements (Reference).
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To remove a package or set of packages from a new boot environment, type:
# luupgrade -P -n second_disk package-name

-P: Indicates to remove the named package or packages from the boot environment.
-n second_disk: Specifies the name of the boot environment where the package is to be removed.
package-name: Specifies the names of the packages to be removed. Separate multiple package names with spaces.
To add a package or a set of packages to the new boot environment, type:
# luupgrade -p -n second_disk -s /path-to-packages package-name

-p: Indicates to add packages to the boot environment.
-n second_disk: Specifies the name of the boot environment where the package is to be added.
-s /path-to-packages: Specifies the path to a directory that contains the package or packages that are to be added.
package-name: Specifies the names of the package or packages to be added. Separate multiple package names with a space.
In this example, packages are removed then added to the second_disk boot environment.
# luupgrade -P -n second_disk SUNWabc SUNWdef SUNWghi
# luupgrade -p -n second_disk -s /net/installmachine/export/packages \
SUNWijk SUNWlmn SUNWpkr
In the following procedure, patches are removed from and added to a new boot environment.
When you are adding and removing packages or patches, Solaris Live Upgrade requires packages or patches that comply with the SVR4 advanced packaging guidelines. While Sun packages conform to these guidelines, Sun cannot guarantee the conformance of packages from third-party vendors. If a package violates these guidelines, the package can cause the package-addition software to fail or can alter the active boot environment.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To remove a patch or set of patches from a new boot environment, type:
# luupgrade -T -n second_disk patch_name

-T: Indicates to remove the named patch or patches from the boot environment.
-n second_disk: Specifies the name of the boot environment where the patch or patches are to be removed.
patch_name: Specifies the names of the patches to be removed. Separate multiple patch names with spaces.
To add a patch or a set of patches to the new boot environment, type the following command.
# luupgrade -t -n second_disk -s /path-to-patches patch-name

-t: Indicates to add patches to the boot environment.
-n second_disk: Specifies the name of the boot environment where the patch is to be added.
-s /path-to-patches: Specifies the path to the directory that contains the patches that are to be added.
patch-name: Specifies the names of the patch or patches that are to be added. Separate multiple patch names with a space.
In this example, patches are removed then added to the second_disk boot environment.
# luupgrade -T -n second_disk 222222-01
# luupgrade -t -n second_disk -s /net/installmachine/export/packages \
333333-01 444444-01
The following procedure checks the integrity of the packages installed on the new boot environment.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To check the integrity of the newly installed packages on the new boot environment, type:
# luupgrade -C -n second_disk -O "-v" package-name

-C: Indicates to run the pkgchk command on the named packages.
-n second_disk: Specifies the name of the boot environment where the check is to be performed.
-O: Passes the options directly to the pkgchk command.
package-name: Specifies the names of the packages to be checked. Separate multiple package names with spaces. If package names are omitted, the check is done on all packages in the specified boot environment.
-v: Specifies to run the command in verbose mode.
In this example, the packages SUNWabc, SUNWdef, and SUNWghi are checked to make sure they were installed properly and are not damaged.
# luupgrade -C -n second_disk SUNWabc SUNWdef SUNWghi
You can create a JumpStart profile to use with Solaris Live Upgrade. If you are familiar with the custom JumpStart program, this is the same profile that custom JumpStart uses. The following procedures enable you to create a profile, test the profile, and install by using the luupgrade command with the -j option.
When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:
archive_location keyword in a profile
luupgrade command with -s, -a, -j, and -J options
For more information see the following:
To Upgrade With a Profile by Using Solaris Live Upgrade (Command-Line Interface)
For creating a JumpStart profile, see Creating a Profile in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations
This procedure shows you how to create a profile for use with Solaris Live Upgrade. You can use this profile to upgrade an inactive boot environment by using the luupgrade command with the -j option.
For procedures to use this profile, see the following sections:
For an upgrade with a profile, see To Upgrade With a Profile by Using Solaris Live Upgrade (Command-Line Interface).
For a Solaris Flash installation with a profile, see To Install a Solaris Flash Archive With a Profile (Command-Line Interface).
Use a text editor to create a text file.
Name the file descriptively. Ensure that the name of the profile reflects how you intend to use the profile to install the Solaris software on a system. For example, you might name this profile upgrade_Solaris_10.
Add profile keywords and values to the profile.
Only the upgrade keywords in the following tables can be used in a Solaris Live Upgrade profile.
The following table lists the keywords you can use with the Install_type keyword values of upgrade or flash_install.
| Keywords for an Initial Archive Creation | Description | Reference |
|---|---|---|
| (Required) Install_type | Defines whether to upgrade the existing Solaris environment on a system or install a Solaris Flash archive on the system. Use the value upgrade or flash_install with this keyword. | For a description of all the values for this keyword, see install_type Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Required for a Solaris Flash archive) archive_location | Retrieves a Solaris Flash archive from a designated location. | For a list of values that can be used with this keyword, see archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Optional) cluster (adding or deleting clusters) | Designates whether a cluster is to be added or deleted from the software group that is to be installed on the system. | For a list of values that can be used with this keyword, see cluster Profile Keyword (Adding Software Groups) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Optional) geo | Designates the regional locale or locales that you want to install on a system or to add when upgrading a system. | For a list of values that can be used with this keyword, see geo Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Optional) local_customization | Before you install a Solaris Flash archive on a clone system, you can create custom scripts to preserve local configurations on the clone system. The local_customization keyword designates the directory where you have stored these scripts. The value is the path to the script on the clone system. | For information about predeployment and postdeployment scripts, see Creating Customization Scripts in Solaris 10 11/06 Installation Guide: Solaris Flash Archives (Creation and Installation). |
| (Optional) locale | Designates the locale packages you want to install or add when upgrading. | For a list of values that can be used with this keyword, see locale Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Optional) package | Designates whether a package is to be added to or deleted from the software group that is to be installed on the system. | For a list of values that can be used with this keyword, see package Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
The following table lists the keywords you can use with the Install_type keyword value flash_update.
| Keywords for a Differential Archive Creation | Description | Reference |
|---|---|---|
| (Required) Install_type | Defines the installation to install a Solaris Flash archive on the system. The value for a differential archive is flash_update. | For a description of all the values for this keyword, see install_type Profile Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Required) archive_location | Retrieves a Solaris Flash archive from a designated location. | For a list of values that can be used with this keyword, see archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Optional) forced_deployment | Forces the installation of a Solaris Flash differential archive onto a clone system that is different than the software expects. If you use forced_deployment, all new files are deleted to bring the clone system to the expected state. If you are not certain that you want files to be deleted, use the default, which protects new files by stopping the installation. | For more information about this keyword, see forced_deployment Profile Keyword (Installing Solaris Flash Differential Archives) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Optional) local_customization | Before you install a Solaris Flash archive on a clone system, you can create custom scripts to preserve local configurations on the clone system. The local_customization keyword designates the directory where you have stored these scripts. The value is the path to the script on the clone system. | For information about predeployment and postdeployment scripts, see Creating Customization Scripts in Solaris 10 11/06 Installation Guide: Solaris Flash Archives (Creation and Installation). |
| (Optional) no_content_check | When installing a clone system with a Solaris Flash differential archive, you can use the no_content_check keyword to ignore file-by-file validation. File-by-file validation ensures that the clone system is a duplicate of the master system. Avoid using this keyword unless you are sure the clone system is a duplicate of the original master system. | For more information about this keyword, see no_content_check Profile Keyword (Installing Solaris Flash Archives) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
| (Optional) no_master_check | When installing a clone system with a Solaris Flash differential archive, you can use the no_master_check keyword to ignore a check of files. Clone system files are not checked. A check would ensure the clone was built from the original master system. Avoid using this keyword unless you are sure the clone system is a duplicate of the original master system. | For more information about this keyword, see no_master_check Profile Keyword (Installing Solaris Flash Archives) in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations. |
Save the profile in a directory on the local system.
Ensure that root owns the profile and that the permissions are set to 644.
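For example, assuming the profile was saved as /var/tmp/upgrade_Solaris_10 (a hypothetical path), you could set and verify the ownership and permissions as follows. The size and date in the sample output are illustrative only:

# chown root:root /var/tmp/upgrade_Solaris_10
# chmod 644 /var/tmp/upgrade_Solaris_10
# ls -l /var/tmp/upgrade_Solaris_10
-rw-r--r--   1 root     root         387 Jan 10 12:00 /var/tmp/upgrade_Solaris_10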
Test the profile (optional).
For a procedure to test the profile, see To Test a Profile to Be Used by Solaris Live Upgrade.
In this example, a profile provides the upgrade parameters. This profile is to be used to upgrade an inactive boot environment with the Solaris Live Upgrade luupgrade command and the -u and -j options. This profile adds a package and a cluster. A regional locale and additional locales are also added to the profile. If you add locales to the profile, make sure that you have created a boot environment with additional disk space.
# profile keywords         profile values
# ----------------         -------------------
  install_type             upgrade
  package                  SUNWxwman add
  cluster                  SUNWCacc add
  geo                      C_Europe
  locale                   zh_TW
  locale                   zh_TW.BIG5
  locale                   zh_TW.UTF-8
  locale                   zh_HK.UTF-8
  locale                   zh_HK.BIG5HK
  locale                   zh
  locale                   zh_CN.GB18030
  locale                   zh_CN.GBK
  locale                   zh_CN.UTF-8
The following example of a profile is to be used by Solaris Live Upgrade to install a differential archive on a clone system. Only files that are specified by the differential archive are added, deleted, or changed. The Solaris Flash archive is retrieved from an NFS server. Because the image was built by the original master system, the clone system is not checked for a valid system image. This profile is to be used with the Solaris Live Upgrade luupgrade command and the -u and -j options.
# profile keywords         profile values
# ----------------         -------------------
  install_type             flash_update
  archive_location         nfs installserver:/export/solaris/archive/solarisarchive
  no_master_check
To use the luupgrade command to install the differential archive, see To Install a Solaris Flash Archive With a Profile (Command-Line Interface).
After you create a profile, use the luupgrade command to test the profile. By looking at the installation output that is generated by luupgrade, you can quickly determine if a profile works as you intended.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Test the profile.
# luupgrade -u -n BE_name -D -s os_image_path -j profile_path

-u: Upgrades an operating system image on a boot environment.
-n BE_name: Specifies the name of the boot environment that is to be upgraded.
-D: The luupgrade command uses the selected boot environment's disk configuration to test the profile options that are passed with the -j option.
-s os_image_path: Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.
-j profile_path: Path to a profile that is configured for an upgrade. The profile must be in a directory on the local machine.
In the following example, the profile is named flash_profile. The profile is successfully tested on the inactive boot environment that is named second_disk.
# luupgrade -u -n second_disk -D -s /net/installsvr/export/u1/combined.u1wos \
-j /var/tmp/flash_profile
Validating the contents of the media /net/installsvr/export/u1/combined.u1wos.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version 10.
Locating upgrade profile template to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE second_disk.
Determining packages to install or upgrade for BE second_disk.
Simulating the operating system upgrade of the BE second_disk.
The operating system upgrade simulation is complete.
INFORMATION: var/sadm/system/data/upgrade_cleanup contains a log of the upgrade operation.
INFORMATION: var/sadm/system/data/upgrade_cleanup contains a log of cleanup operations required.
The Solaris upgrade of the boot environment second_disk is complete.
You can now use the profile to upgrade an inactive boot environment.
This procedure provides step-by-step instructions for upgrading an OS by using a profile.
If you want to install a Solaris Flash archive by using a profile, see To Install a Solaris Flash Archive With a Profile (Command-Line Interface).
If you added locales to the profile, make sure that you have created a boot environment with additional disk space.
When you install the Solaris OS with a Solaris Flash archive, the archive and the installation media must contain identical OS versions. For example, if the archive is the Solaris 10 operating system and you are using DVD media, then you must use Solaris 10 DVD media to install the archive. If the OS versions do not match, the installation on the target system fails. Identical operating systems are necessary when you use the following keyword or command:
archive_location keyword in a profile
luupgrade command with -s, -a, -j, and -J options
Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Create a profile.
See To Create a Profile to be Used by Solaris Live Upgrade for a list of upgrade keywords that can be used in a Solaris Live Upgrade profile.
Type:
# luupgrade -u -n BE_name -s os_image_path -j profile_path

-u: Upgrades an operating system image on a boot environment.
-n BE_name: Specifies the name of the boot environment that is to be upgraded.
-s os_image_path: Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.
-j profile_path: Path to a profile. The profile must be in a directory on the local machine. For information about creating a profile, see To Create a Profile to be Used by Solaris Live Upgrade.
The boot environment is ready to be activated.
In this example, the second_disk boot environment is upgraded by using a profile. The -j option is used to access the profile. The boot environment is then ready to be activated. To create a profile, see To Create a Profile to be Used by Solaris Live Upgrade. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.
# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -u -n second_disk \
-s /net/installmachine/export/solarisX/OS_image \
-j /var/tmp/profile
This section provides the procedure for using Solaris Live Upgrade to install Solaris Flash archives. Installing a Solaris Flash archive overwrites all files on the new boot environment except for shareable files. Archives are stored on the following media:
HTTP server
FTP server – Use this path from the command line only
NFS server
Local file
Local tape
Local device, including DVD or CD
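As a rough illustration only, archive_location values typically take forms like the following, where the server names and paths are hypothetical. For the authoritative syntax, see archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.

archive_location nfs installserver:/export/archives/archive1.flar
archive_location http://installserver/archives/archive2.flar
archive_location local_file /archives/archive3.flar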
Note the following issues with installing and creating a Solaris Flash archive.
| Description | For More Information |
|---|---|
| For examples of the correct syntax for paths that are associated with archive storage. | |
| To use the Solaris Flash installation feature, you install a master system and create the Solaris Flash archive. | For more information about creating an archive, see Chapter 3, Creating Solaris Flash Archives (Tasks), in Solaris 10 11/06 Installation Guide: Solaris Flash Archives (Creation and Installation). |
Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.
From the Solaris Live Upgrade main menu, select Flash.
The Flash an Inactive Boot Environment menu is displayed.
Type the name of the boot environment where you want to install the Solaris Flash archive and the location of the installation media:
Name of Boot Environment: Solaris_10
Package media: /net/install-svr/export/Solaris_10/latest
Press F1 to add an archive.
An Archive Selection submenu is displayed.
Location - Retrieval Method
<No Archives added> - Select ADD to add archives
This menu enables you to build a list of archives. To add or remove archives, proceed with the following steps.
To add an archive to the menu, press F1.
A Select Retrieval Method submenu is displayed.
HTTP
NFS
Local File
Local Tape
Local Device
On the Select Retrieval Method menu, select the location of the Solaris Flash archive.
| Media Selected | Prompt |
|---|---|
| HTTP | Specify the URL and proxy information that is needed to access the Solaris Flash archive. |
| NFS | Specify the path to the network file system where the Solaris Flash archive is located. You can also specify the archive file name. |
| Local file | Specify the path to the local file system where the Solaris Flash archive is located. |
| Local tape | Specify the local tape device and the position on the tape where the Solaris Flash archive is located. |
| Local device | Specify the local device, the path to the Solaris Flash archive, and the type of file system on which the Solaris Flash archive is located. |
A Retrieval submenu is displayed, similar to the following example, which depends on the media you selected.
NFS Location:
Type the path to the archive, as in the following example.
NFS Location: host:/path/to/archive.flar
Press F3 to add the archive to the list.
(Optional) To remove an archive from the menu, press F2.
When the list contains the archives that you want to install, press F6 to exit.
Press F3 to install one or more archives.
The Solaris Flash archive is installed on the boot environment. All files on the boot environment are overwritten, except for shareable files.
The boot environment is ready for activation. See To Activate a Boot Environment (Character User Interface).
Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# luupgrade -f -n BE_name -s os_image_path -a archive

-f: Indicates to install an operating system from a Solaris Flash archive.
-n BE_name: Specifies the name of the boot environment that is to be installed with an archive.
-s os_image_path: Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.
-a archive: Path to the Solaris Flash archive when the archive is available on the local file system. The operating system image versions that are specified with the -s option and the -a option must be identical.
In this example, an archive is installed on the second_disk boot environment. The archive is located on the local system. The operating system versions for the -s and -a options are both Solaris 10 11/06 releases. All files are overwritten on second_disk except shareable files. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.
# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archive/10
The boot environment is ready to be activated.
This procedure provides the steps to install a Solaris Flash archive or differential archive by using a profile.
If you added locales to the profile, make sure that you have created a boot environment with additional disk space.
Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Create a profile.
See To Create a Profile to be Used by Solaris Live Upgrade for a list of keywords that can be used in a Solaris Live Upgrade profile.
Type:
# luupgrade -f -n BE_name -s os_image_path -j profile_path

-f: Indicates to install an operating system from a Solaris Flash archive.
-n BE_name: Specifies the name of the boot environment that is to be upgraded.
-s os_image_path: Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.
-j profile_path: Path to a JumpStart profile that is configured for a flash installation. The profile must be in a directory on the local machine. The -s option's operating system version and the Solaris Flash archive operating system version must be identical.
The boot environment is ready to be activated.
In this example, a profile provides the location of the archive to be installed.
# profile keywords         profile values
# ----------------         -------------------
  install_type             flash_install
  archive_location         nfs installserver:/export/solaris/flasharchive/solarisarchive
After creating the profile, you can run the luupgrade command and install the archive. The -j option is used to access the profile. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.
# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -f -n second_disk \
-s /net/installmachine/export/solarisX/OS_image \
-j /var/tmp/profile
The boot environment is then ready to be activated. To create a profile, see To Create a Profile to be Used by Solaris Live Upgrade.
This procedure enables you to install a Solaris Flash archive and use the archive_location keyword at the command line rather than from a profile file. You can quickly retrieve an archive without the use of a profile file.
Install the Solaris Live Upgrade SUNWlur and SUNWluu packages on your system. These packages must be from the release you are upgrading to. For step-by-step procedures, see To Install Solaris Live Upgrade With the pkgadd Command.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# luupgrade -f -n BE_name -s os_image_path -J 'archive_location path-to-profile'

-f: Specifies to upgrade an operating system from a Solaris Flash archive.
-n BE_name: Specifies the name of the boot environment that is to be upgraded.
-s os_image_path: Specifies the path name of a directory that contains an operating system image. This directory can be on an installation medium, such as a DVD-ROM or CD-ROM, or it can be an NFS or UFS directory.
-J 'archive_location path-to-profile': Specifies the archive_location profile keyword and the path to the JumpStart profile. The -s option's operating system version and the Solaris Flash archive operating system version must be identical. For the keyword values, see archive_location Keyword in Solaris 10 11/06 Installation Guide: Custom JumpStart and Advanced Installations.
The boot environment is ready to be activated.
In this example, an archive is installed on the second_disk boot environment. The -J option and the archive_location keywords are used to retrieve the archive. All files are overwritten on second_disk except shareable files. The pkgadd command adds the Solaris Live Upgrade packages from the release you are upgrading to.
# pkgadd -d /server/packages SUNWlur SUNWluu
# luupgrade -f -n second_disk \
-s /net/installmachine/export/solarisX/OS_image \
-J 'archive_location http://example.com/myflash.flar'
Activating a boot environment makes it bootable on the next reboot of the system. You can also switch back quickly to the original boot environment if a failure occurs on booting the newly active boot environment. See Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
| Description | For More Information |
|---|---|
| Use this procedure to activate a boot environment and use a character user interface (CUI). Note – The first time you activate a boot environment, the Activate menu or the luactivate command must be used. | |
| Use this procedure to activate a boot environment with the luactivate command. Note – The first time you activate a boot environment, the Activate menu or the luactivate command must be used. | |
| Use this procedure to activate a boot environment and force a synchronization of files. Note – Files are synchronized with the first activation. If you switch boot environments after the first activation, files are not synchronized. | To Activate a Boot Environment and Synchronize Files (Command-Line Interface) |
| x86: Use this procedure to activate a boot environment with the GRUB menu. Note – A GRUB menu can facilitate switching from one boot environment to another. A boot environment appears in the GRUB menu after the first activation. | x86: To Activate a Boot Environment With the GRUB Menu (Command-Line Interface) |
To successfully activate a boot environment, that boot environment must meet the following conditions:
| Description | For More Information |
|---|---|
| The boot environment must have a status of “complete.” | To check status, see Displaying the Status of All Boot Environments. |
| If the boot environment is not the current boot environment, you cannot have mounted the partitions of that boot environment by using the lumount or mount commands. | To view man pages, see lumount(1M) or mount(1M). |
| The boot environment that you want to activate cannot be involved in a comparison operation. | For procedures, see Comparing Boot Environments. |
| If you want to reconfigure swap, make this change prior to booting the inactive boot environment. By default, all boot environments share the same swap devices. | To reconfigure swap, see the procedures for creating a new boot environment. |
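For example, you can give the new boot environment its own swap slice when you create the boot environment by using the lucreate command with the -m option. The following sketch is illustrative only; the boot environment name and the device names are placeholders:

# lucreate -n second_disk -m /:/dev/dsk/c0t4d0s0:ufs \
-m -:/dev/dsk/c0t4d0s1:swap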
If you have an x86 based system, you can also activate with the GRUB menu. Note the following exceptions:
If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command or the Activate menu. These older boot environments do not display on the GRUB menu.
The first time you activate a boot environment, you must use the luactivate command or the Activate menu. The next time you boot, that boot environment's name is displayed in the GRUB main menu. You can thereafter switch to this boot environment by selecting the appropriate entry in the GRUB menu.
See x86: Activating a Boot Environment With the GRUB Menu.
The first time you boot from a newly created boot environment, Solaris Live Upgrade software synchronizes the new boot environment with the boot environment that was last active. “Synchronize” means that certain critical system files and directories are copied from the last-active boot environment to the boot environment being booted. Solaris Live Upgrade does not perform this synchronization after this initial boot unless you request to do so when prompted to force a synchronization.
For more information about synchronization, see Synchronizing Files Between Boot Environments.
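As one point of reference, on Solaris 10 systems the set of files that Solaris Live Upgrade synchronizes is typically driven by the /etc/lu/synclist file, whose entries pair a path with an action, as in the following illustrative excerpt. Consult Synchronizing Files Between Boot Environments for the authoritative list and actions before editing this file.

/var/mail                OVERWRITE
/etc/passwd              OVERWRITE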
If you have an x86 based system, you can also activate with the GRUB menu. Note the following exceptions:
If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command or the Activate menu. These older boot environments do not display on the GRUB menu.
The first time you activate a boot environment, you must use the luactivate command or the Activate menu. The next time you boot, that boot environment's name is displayed in the GRUB main menu. You can thereafter switch to this boot environment by selecting the appropriate entry in the GRUB menu.
See x86: Activating a Boot Environment With the GRUB Menu.
From the Solaris Live Upgrade main menu, select Activate.
Type the name of the boot environment to make active:
Name of Boot Environment: Solaris_10
Do you want to force a Live Upgrade sync operations: no
You can either continue or force a synchronization of files.
Press Return to continue.
The first time that the boot environment is booted, files are automatically synchronized.
You can force a synchronization of files, but use this feature with caution. Operating systems on each boot environment must be compatible with files that are being synchronized. To force a synchronization of files, type:
Do you want to force a Live Upgrade sync operations: yes
Use a forced synchronization with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 11/06 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 11/06 files might not be compatible with the Solaris 9 files.
Press F3 to begin the activation process.
Press Return to continue.
The new boot environment is activated at the next reboot.
To activate the inactive boot environment, reboot:
# init 6
The following procedure switches a new boot environment to become the currently running boot environment.
If you have an x86 based system, you can also activate with the GRUB menu. Note the following exceptions:
If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command or the Activate menu. These older boot environments do not display on the GRUB menu.
The first time you activate a boot environment, you must use the luactivate command or the Activate menu. The next time you boot, that boot environment's name is displayed in the GRUB main menu. You can thereafter switch to this boot environment by selecting the appropriate entry in the GRUB menu.
See x86: Activating a Boot Environment With the GRUB Menu.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To activate the boot environment, type:
# /sbin/luactivate BE_name

BE_name: Specifies the name of the boot environment that is to be activated.
Reboot.
# init 6
Use only the init or shutdown commands to reboot. If you use the reboot, halt, or uadmin commands, the system does not switch boot environments. The last-active boot environment is booted again.
In this example, the second_disk boot environment is activated at the next reboot.
# /sbin/luactivate second_disk
# init 6
The first time you boot from a newly created boot environment, Solaris Live Upgrade software synchronizes the new boot environment with the boot environment that was last active. “Synchronize” means that certain critical system files and directories are copied from the last-active boot environment to the boot environment being booted. Solaris Live Upgrade does not perform this synchronization after the initial boot, unless you force synchronization with the luactivate command and the -s option.
When you switch between boot environments with the GRUB menu, files also are not synchronized. You must use the following procedure to synchronize files.
For more information about synchronization, see Synchronizing Files Between Boot Environments.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To activate the boot environment, type:
# /sbin/luactivate -s BE_name

-s: Forces a synchronization of files between the last-active boot environment and the new boot environment. The first time that a boot environment is activated, the files between the boot environments are synchronized. With subsequent activations, the files are not synchronized unless you use the -s option.
Use this option with great care, because you might not be aware of or in control of changes that might have occurred in the last-active boot environment. For example, if you were running Solaris 10 11/06 software on your current boot environment and booted back to a Solaris 9 release with a forced synchronization, files could be changed on the Solaris 9 release. Because files are dependent on the release of the OS, the boot to the Solaris 9 release could fail because the Solaris 10 11/06 files might not be compatible with the Solaris 9 files.
BE_name: Specifies the name of the boot environment that is to be activated.
Reboot.
# init 6
In this example, the second_disk boot environment is activated at the next reboot and the files are synchronized.
# /sbin/luactivate -s second_disk
# init 6
A GRUB menu provides an optional method of switching between boot environments. The GRUB menu is an alternative to activating (booting) with the luactivate command or the Activate menu. The table below notes cautions and limitations when using the GRUB menu.
Table 5–3 x86: Activating With the GRUB Menu Summary
| Task | Description | For More Information |
|---|---|---|
| Caution | After you have activated a boot environment, do not change the disk order in the BIOS. Changing the order might cause the GRUB menu to become invalid. If this problem occurs, changing the disk order back to the original state fixes the GRUB menu. | |
| Activating a boot environment for the first time | The first time you activate a boot environment, you must use the luactivate command or the Activate menu. The next time you boot, that boot environment's name is displayed in the GRUB main menu. You can thereafter switch to this boot environment by selecting the appropriate entry in the GRUB menu. | |
| Synchronizing files | The first time you activate a boot environment, files are synchronized between the current boot environment and the new boot environment. With subsequent activations, files are not synchronized. When you switch between boot environments with the GRUB menu, files also are not synchronized. You can force a synchronization when using the luactivate command with the -s option. | To Activate a Boot Environment and Synchronize Files (Command-Line Interface) |
| Boot environments created before the Solaris 10 1/06 release | If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command or the Activate menu. These older boot environments do not display on the GRUB menu. | |
| Editing or customizing the GRUB menu entries | The menu.lst file contains the information that is displayed in the GRUB menu. You can revise this file, for example, to customize booting behavior. Note – If you want to change the GRUB menu, you need to locate the menu.lst file. For step-by-step instructions, see x86: Locating the GRUB Menu's menu.lst File (Tasks). Caution – Do not use the GRUB menu.lst file to modify Solaris Live Upgrade entries. Modifications could cause Solaris Live Upgrade to fail. Although you can use the menu.lst file to customize booting behavior, the preferred method for customization is to use the eeprom command. If you use the menu.lst file to customize, the Solaris OS entries might be modified during a software upgrade and changes to the file could be lost. | |
You can switch between two boot environments with the GRUB menu. Note the following limitations:
The first activation of a boot environment must be done with the luactivate command or the Activate menu. After the initial activation, the boot environment is displayed on the GRUB menu. The boot environment can then be booted from the GRUB menu.
Caution - Switching to a boot environment with the GRUB menu bypasses synchronization. For more information about synchronizing files, see Forcing a Synchronization Between Boot Environments.
If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment must always be activated with the luactivate command or the Activate menu. These older boot environments are not displayed on the GRUB menu.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Reboot the system.
# init 6
The GRUB main menu is displayed. Two operating systems are listed: Solaris and second_disk, which is a Solaris Live Upgrade boot environment. The failsafe entries are for recovery if, for some reason, the primary OS does not boot.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                             |
|Solaris failsafe                                                    |
|second_disk                                                         |
|second_disk failsafe                                                |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
To activate a boot environment, use the arrow key to select the desired boot environment and press Return.
The selected boot environment is booted and becomes the active boot environment.
This chapter explains how to recover from an activation failure.
If a failure is detected after upgrading or if the application is not compatible with an upgraded component, fall back to the original boot environment by using one of the following procedures, depending on your platform.
You can fall back to the original boot environment by using one of three methods:
SPARC: To Fall Back Despite Successful New Boot Environment Activation
SPARC: To Fall Back From a Failed Boot Environment Activation
SPARC: To Fall Back to the Original Boot Environment by Using a DVD, CD, or Net Installation Image
Use this procedure when you have successfully activated your new boot environment, but are unhappy with the results.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /sbin/luactivate BE_name

BE_name: Specifies the name of the boot environment to be activated.
Reboot.
# init 6
The previous working boot environment becomes the active boot environment.
If you experience a failure while booting the new boot environment and can boot the original boot environment in single-user mode, use this procedure to fall back to the original boot environment.
If you need to boot from media or a net installation image, see SPARC: To Fall Back to the Original Boot Environment by Using a DVD, CD, or Net Installation Image.
At the OK prompt, boot the machine to single-user state from the Solaris Operating System DVD, Solaris Software - 1 CD, the network, or a local disk.
OK boot device_name -s

device_name: Specifies the name of the device from which the system can boot, for example /dev/dsk/c0t0d0s0.
Type:
# /sbin/luactivate BE_name

BE_name: Specifies the name of the boot environment to be activated.
If this command fails to display a prompt, proceed to SPARC: To Fall Back to the Original Boot Environment by Using a DVD, CD, or Net Installation Image.
If the prompt is displayed, continue.
At the prompt, type:
Do you want to fallback to activate boot environment <disk name> (yes or no)? yes
A message displays that the fallback activation is successful.
Reboot.
# init 6
The previous working boot environment becomes the active boot environment.
Use this procedure to boot from a DVD, CD, a net installation image or another disk that can be booted. You need to mount the root (/) slice from the last-active boot environment. Then run the luactivate command, which makes the switch. When you reboot, the last-active boot environment is up and running again.
At the OK prompt, boot the machine to single-user state from the Solaris Operating System DVD, Solaris Software - 1 CD, the network, or a local disk:
OK boot cdrom -s

or

OK boot net -s

or

OK boot device_name -s

device_name: Specifies the name of the disk and the slice where a copy of the operating system resides, for example /dev/dsk/c0t0d0s0.
If necessary, check the integrity of the root (/) file system for the fallback boot environment.
# fsck device_name

device_name: Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.
Mount the active boot environment root (/) slice to some directory, such as /mnt:
# mount device_name /mnt

device_name: Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.
From the active boot environment root (/) slice, type:
# /mnt/sbin/luactivate
luactivate activates the previous working boot environment and indicates the result.
Unmount /mnt.
# umount /mnt
Reboot.
# init 6
The previous working boot environment becomes the active boot environment.
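Put together, a fallback session from media might look like the following sketch, which assumes that the last-active boot environment's root (/) slice is /dev/dsk/c0t0d0s0 (a placeholder device):

OK boot cdrom -s
# fsck /dev/dsk/c0t0d0s0
# mount /dev/dsk/c0t0d0s0 /mnt
# /mnt/sbin/luactivate
# umount /mnt
# init 6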
To fall back to the original boot environment, choose the procedure that best fits your circumstances.
x86: To Fall Back Despite Successful New Boot Environment Activation With the GRUB Menu
x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu
x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu and the DVD or CD
Use this procedure when you have successfully activated your new boot environment, but are dissatisfied with the results. You can quickly switch back to the original boot environment by using the GRUB menu.
The boot environments that are being switched must be GRUB boot environments that were created with GRUB software. If a boot environment was created with the Solaris 8, 9, or 10 3/05 release, the boot environment is not a GRUB boot environment.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Reboot the system.
# init 6
The GRUB menu is displayed. The Solaris OS is the original boot environment. The second_disk boot environment was successfully activated and appears on the GRUB menu. The failsafe entries are for recovery if for some reason the primary entry does not boot.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                             |
|Solaris failsafe                                                    |
|second_disk                                                         |
|second_disk failsafe                                                |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
To boot to the original boot environment, use the arrow key to select the original boot environment and press Return.
In this example, the system is rebooted to display the GRUB menu so that the original boot environment can be selected.

# su
# init 6
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                             |
|Solaris failsafe                                                    |
|second_disk                                                         |
|second_disk failsafe                                                |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
Select the original boot environment, Solaris.
If you experience a failure while booting, use the following procedure to fall back to the original boot environment. In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.
For the Solaris 10 3/05 release, the recommended action to fall back if the previous boot environment and new boot environment were on different disks included changing the hard disk boot order in the BIOS. Starting with the Solaris 10 1/06 release, changing the BIOS disk order is unnecessary and is strongly discouraged. Changing the BIOS disk order might invalidate the GRUB menu and cause the boot environment to become unbootable. If the BIOS disk order is changed, reverting the order back to the original settings restores system functionality.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To display the GRUB menu, reboot the system.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                             |
|Solaris failsafe                                                    |
|second_disk                                                         |
|second_disk failsafe                                                |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to this procedure, x86: To Fall Back From a Failed Boot Environment Activation With the GRUB Menu and the DVD or CD.
Boot to single-user mode by editing the GRUB menu.
To edit the GRUB main menu, type e.
The GRUB edit menu is displayed.
root (hd0,2,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
Select the original boot environment's kernel entry by using the arrow keys.
To edit the boot entry, type e.
The kernel entry is displayed in the GRUB edit menu.
grub edit>kernel /boot/multiboot
Type -s and press Enter.
The following example notes the placement of the -s option.
grub edit>kernel /boot/multiboot -s
To begin the booting process in single user mode, type b.
If necessary, check the integrity of the root (/) file system for the fallback boot environment.
# fsck mount_point

mount_point: A root (/) file system that is known and reliable.
Mount the original boot environment root slice to some directory (such as /mnt):
# mount device_name /mnt

device_name: Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.
From the active boot environment root slice, type:
# /mnt/sbin/luactivate
luactivate activates the previous working boot environment and indicates the result.
Unmount /mnt.
# umount /mnt
Reboot.
# init 6
The previous working boot environment becomes the active boot environment.
If you experience a failure while booting, use the following procedure to fall back to the original boot environment. In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.
For the Solaris 10 3/05 release, the recommended action to fall back if the previous boot environment and new boot environment were on different disks included changing the hard disk boot order in the BIOS. Starting with the Solaris 10 1/06 release, changing the BIOS disk order is unnecessary and is strongly discouraged. Changing the BIOS disk order might invalidate the GRUB menu and cause the boot environment to become unbootable. If the BIOS disk order is changed, reverting the order back to the original settings restores system functionality.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.
Boot from the DVD or CD.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.
Boot to single user mode by editing the GRUB menu.
To edit the GRUB main menu, type e.
The GRUB edit menu is displayed.
root (hd0,2,a)
kernel /platform/i86pc/multiboot
module /platform/i86pc/boot_archive
Select the original boot environment's kernel entry by using the arrow keys.
To edit the boot entry, type e.
The kernel entry is displayed in an editor.
grub edit>kernel /boot/multiboot
Type -s and press Enter.
The following example notes the placement of the -s option.
grub edit>kernel /boot/multiboot -s
To begin the booting process in single user mode, type b.
If necessary, check the integrity of the root (/) file system for the fallback boot environment.
# fsck mount_point

mount_point – A root (/) file system that is known and reliable
Mount the original boot environment root slice to some directory (such as /mnt):
# mount device_name /mnt

device_name – Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.
From the active boot environment root slice, type:
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
luactivate activates the previous working boot environment and indicates the result.
Unmount /mnt.
# umount device_name

device_name – Specifies the location of the root (/) file system on the disk device of the boot environment you want to fall back to. The device name is entered in the form of /dev/dsk/cwtxdysz.
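For example, if the boot environment's root (/) file system is on /dev/dsk/c0t4d0s0:

# umount /dev/dsk/c0t4d0s0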
Reboot.
# init 6
The previous working boot environment becomes the active boot environment.
This chapter explains various maintenance tasks such as keeping a boot environment file system up to date or deleting a boot environment. This chapter contains the following sections:
Use the Status menu or the lustatus command to display the information about the boot environment. If no boot environment is specified, the status information for all boot environments on the system is displayed.
The following details for each boot environment are displayed:
Name – Name of each boot environment.
Complete – Indicates that no copy or create operations are in progress. Also, the boot environment can be booted. Any current activity or failure in a create or upgrade operation causes a boot environment to be incomplete. For example, if a copy operation is in process or scheduled for a boot environment, that boot environment is considered incomplete.
Active – Indicates if this is the active boot environment.
ActiveOnReboot – Indicates if the boot environment becomes active on next reboot of the system.
CopyStatus – Indicates if the creation or copy of the boot environment is scheduled, active, or in the process of being upgraded. A status of SCHEDULED prevents you from performing live upgrade copy, rename, or upgrade operations.
From the main menu, select Status.
A table that is similar to the following is displayed:
boot environment           Is         Active   Active     Can      Copy
Name                       Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
disk_a_S9                  yes        yes      yes        no       -
disk_b_S10database         yes        no       no         yes      COPYING
disk_b_S9a                 no         no       no         yes      -
In this example, you could not perform copy, rename, or upgrade operations on disk_b_S9a because it is not complete, nor on disk_b_S10database, because a live upgrade operation is in progress.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lustatus BE_name

BE_name – Specifies the name of the inactive boot environment to view status. If BE_name is omitted, lustatus displays status for all boot environments in the system.
In this example, the status for all boot environments is displayed.
# lustatus
boot environment           Is         Active   Active     Can      Copy
Name                       Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
disk_a_S9                  yes        yes      yes        no       -
disk_b_S10database         yes        no       no         yes      COPYING
disk_b_S9a                 no         no       no         yes      -
You could not perform copy, rename, or upgrade operations on disk_b_S9a because it is not complete, nor on disk_b_S10database because a live upgrade operation is in progress.
You can update the contents of a previously configured boot environment with the Copy menu or the lumake command. File systems from the active (source) boot environment are copied to the target boot environment, and any existing data on the target is destroyed. A boot environment must have the status “complete” before you can copy from it. See Displaying the Status of All Boot Environments to determine a boot environment's status.
The copy job can be scheduled for a later time, and only one job can be scheduled at a time. To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.
From the main menu, select Copy.
Type the name of the inactive boot environment to update:
Name of Target Boot Environment: solaris8
Continue or schedule the copy to occur later:
To continue with the copy, press Return.
The inactive boot environment is updated.
To schedule the copy for later, type y, a time (by using the at command format), and the email address to which to send the results:
Do you want to schedule the copy? y
Enter the time in 'at' format to schedule copy: 8:15 PM
Enter the address to which the copy log should be mailed:
someone@anywhere.com
For information about time formats, see the at(1) man page.
The inactive boot environment is updated.
To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.
This procedure copies source files over outdated files on a boot environment that was previously created.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lumake -n BE_name [-s source_BE] [-t time] [-m email_address]

-n BE_name – Specifies the name of the boot environment that has file systems that are to be replaced.

-s source_BE – (Optional) Specifies the name of the source boot environment that contains the file systems to be copied to the target boot environment. If you omit this option, lumake uses the current boot environment as the source.

-t time – (Optional) Sets up a batch job to copy over file systems on a specified boot environment at a specified time. The time is given in the format that is specified by the man page, at(1).

-m email_address – (Optional) Enables you to send an email of the lumake output to a specified address on command completion. email_address is not checked. You can use this option only in conjunction with -t.
In this example, file systems from first_disk are copied to second_disk. When the job is completed, an email is sent to Joe at anywhere.com.
# lumake -n second_disk -s first_disk -m joe@anywhere.com
The files on first_disk are copied to second_disk and email is sent for notification. To cancel a scheduled copy, see Canceling a Scheduled Create, Upgrade, or Copy Job.
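Because the -m option is documented to work only in conjunction with -t, a scheduled variant of the same copy would look something like the following sketch. The time value is illustrative and uses the at(1) format:

# lumake -n second_disk -s first_disk -t 8:15pm -m joe@anywhere.com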
A boot environment's scheduled creation, upgrade, or copy job can be canceled just prior to the time the job starts. A job can be scheduled for a specific time either in the GUI with the Create a Boot Environment, Upgrade a Boot Environment, or Copy a Boot Environment menus. In the CLI, the job can be scheduled by the lumake command. At any time, only one job can be scheduled on a system.
From the main menu, select Cancel.
To view a list of boot environments that are available for canceling, press F2.
Select the boot environment to cancel.
The job no longer executes at the time specified.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lucancel
The job no longer executes at the time that is specified.
Use the Compare menu or lucompare to check for differences between the active boot environment and other boot environments. To make a comparison, the inactive boot environment must be in a complete state and cannot have a copy job that is pending. See Displaying the Status of All Boot Environments.
The specified boot environment cannot have any partitions that are mounted with lumount or mount.
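For example, before comparing a boot environment named second_disk (an assumed name), you can verify that it is complete and has no copy job pending:

# lustatus second_disk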
From the main menu, select Compare.
Select either Compare to Original or Compare to an Active Boot Environment.
Press F3.
Type the names of the original (active) boot environment, the inactive boot environment, and the path to a file:
Name of Parent: solaris8
Name of Child: solaris8-1
Full Pathname of the file to Store Output: /tmp/compare
To save to the file, press F3.
The Compare menu displays the following file attributes:
Mode.
Number of links.
Owner.
Group.
Checksum – Computes checksums only if the file in the specified boot environment matches its counterpart on the active boot environment in all of the fields that are described previously. If everything matches but the checksums differ, the differing checksums are appended to the entries for the compared files.
Size.
Existence of files in only one boot environment.
To return to the Compare menu, press F3.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/lucompare -i infile (or) -t -o outfile BE_name

-i infile – Compare files that are listed in infile. The files to be compared should have absolute file names. If an entry in the file is a directory, the comparison is recursive to the directory. Use either this option or -t, not both.

-t – Compare only nonbinary files. This comparison uses the file(1) command on each file to determine if the file is a text file. Use either this option or -i, not both.

-o outfile – Redirects the output of differences to outfile.

BE_name – Specifies the name of the boot environment that is compared to the active boot environment.
In this example, the first_disk boot environment (the source) is compared to the second_disk boot environment, and the results are sent to a file.
# /usr/sbin/lucompare -i /etc/lu/compare/ \
-o /var/tmp/compare.out second_disk
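For illustration, a hypothetical infile is a plain text file that lists absolute path names, one per line. A directory entry makes the comparison recurse into that directory:

/etc/passwd
/usr/sbin
/var/sadm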
Use either the Delete menu or the ludelete command to remove a boot environment. Note the following limitations.
You cannot delete the active boot environment or the boot environment that is activated on the next reboot.
The boot environment to be deleted must be complete. A complete boot environment is not participating in an operation that will change its status. Use Displaying the Status of All Boot Environments to determine a boot environment's status.
You cannot delete a boot environment that has file systems mounted with lumount.
x86 only: Starting with the Solaris 10 1/06 release, you cannot delete a boot environment that contains the active GRUB menu. Use the lumake or luupgrade commands to reuse the boot environment. To determine which boot environment contains the active GRUB menu, see x86: Locating the GRUB Menu's menu.lst File (Tasks).
From the main menu, select Delete.
Type the name of the inactive boot environment you want to delete:
Name of boot environment: solaris8
The inactive boot environment is deleted.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# ludelete BE_name

BE_name – Specifies the name of the inactive boot environment that is to be deleted
In this example, the boot environment, second_disk, is deleted.
# ludelete second_disk
Use the Current menu or the lucurr command to display the name of the currently running boot environment. If no boot environments are configured on the system, the message “No Boot Environments are defined” is displayed. Note that lucurr reports only the name of the current boot environment, not the boot environment that is active on the next reboot. See Displaying the Status of All Boot Environments to determine a boot environment's status.
From the main menu, select Current.
The active boot environment's name or the message “No Boot Environments are defined” is displayed.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/lucurr
In this example, the name of the current boot environment is displayed.
# /usr/sbin/lucurr
solaris8
Renaming a boot environment is often useful when you upgrade the boot environment from one Solaris release to another release. For example, following an operating system upgrade, you might rename the boot environment solaris8 to solaris10.
Use the Rename menu or lurename command to change the inactive boot environment's name.
Starting with the Solaris 10 1/06 release, the GRUB menu is automatically updated when you use the Rename menu or lurename command. The updated GRUB menu displays the boot environment's name in the list of boot entries. For more information about the GRUB menu, see x86: Activating a Boot Environment With the GRUB Menu.
To determine the location of the GRUB menu's menu.lst file, see x86: Locating the GRUB Menu's menu.lst File (Tasks).
Limitation | For Instructions |
---|---|
The name must not exceed 30 characters in length. | |
The name can consist only of alphanumeric characters and other ASCII characters that are not special to the UNIX shell. | See the “Quoting” section of sh(1). |
The name can contain only single-byte, 8-bit characters. | |
The name must be unique on the system. | |
A boot environment must have the status “complete” before you rename it. | See Displaying the Status of All Boot Environments to determine a boot environment's status. |
You cannot rename a boot environment that has file systems mounted with lumount or mount. | |
From the main menu, select Rename.
Type the boot environment to rename and then the new name.
To save your changes, press F3.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lurename -e BE_name -n new_name

-e BE_name – Specifies the inactive boot environment name to be changed

-n new_name – Specifies the new name of the inactive boot environment
In this example, second_disk is renamed to third_disk.
# lurename -e second_disk -n third_disk
You can associate a description with a boot environment name. The description never replaces the name. Although a boot environment name is restricted in length and characters, the description can be of any length and of any content. The description can be simple text or as complex as a gif file. You can create this description at these times:
When you create a boot environment with the lucreate command and use the -A option
After the boot environment has been created by using the ludesc command
For more information about using the -A option with lucreate, see To Create a Boot Environment for the First Time (Command-Line Interface).

For more information about creating the description after the boot environment has been created, see the procedures that follow in this section.
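As a sketch of the first method, the description is attached at creation time by adding -A to an ordinary lucreate command line. The boot environment name and slice shown here are illustrative assumptions:

# lucreate -A 'Solaris 10 11/06 test build' -m /:/dev/dsk/c0t1d0s0:ufs \
-n second_disk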
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/ludesc -n BE_name 'BE_description'

-n BE_name 'BE_description' – Specifies the boot environment name and the new description to be associated with the name
In this example, a boot environment description is added to a boot environment that is named second_disk. The description is text that is enclosed in single quotes.
# /usr/sbin/ludesc -n second_disk 'Solaris 10 11/06 test build'
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/ludesc -n BE_name -f file_name

-n BE_name – Specifies the boot environment name

-f file_name – Specifies the file to be associated with a boot environment name
In this example, a boot environment description is added to a boot environment that is named second_disk. The description is contained in a gif file.
# /usr/sbin/ludesc -n second_disk -f rose.gif
The following command returns the name of the boot environment associated with the specified description.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/ludesc -A 'BE_description'

-A 'BE_description' – Specifies the description to be associated with the boot environment name.
In this example, the name of the boot environment, second_disk, is determined by using the -A option with the description.
# /usr/sbin/ludesc -A 'Solaris 10 11/06 test build'
second_disk
The following command displays the boot environment's name that is associated with a file. The file contains the description of the boot environment.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/ludesc -f file_name

-f file_name – Specifies the name of the file that contains the description of the boot environment.
In this example, the name of the boot environment, second_disk, is determined by using the -f option and the name of the file that contains the description.
# /usr/sbin/ludesc -f rose.gif
second_disk
This procedure displays the description of the boot environment that is named in the command.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# /usr/sbin/ludesc -n BE_name

-n BE_name – Specifies the boot environment name.
In this example, the description is determined by using the -n option with the boot environment name.
# /usr/sbin/ludesc -n second_disk
Solaris 10 11/06 test build
Use the List menu or the lufslist command to list the configuration of a boot environment. The output contains the disk slice (file system), file system type, and file system size for each boot environment mount point.
From the main menu, select List.
To view the status of a boot environment, type the name.
Name of Boot Environment: solaris8
Press F3.
The following example displays a list.
Filesystem              fstype    size(Mb)  Mounted on
------------------------------------------------------------------
/dev/dsk/c0t0d0s1       swap        512.11  -
/dev/dsk/c0t4d0s3       ufs        3738.29  /
/dev/dsk/c0t4d0s4       ufs         510.24  /opt
To return to the List menu, press F6.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
Type:
# lufslist -n BE_name

-n BE_name – Specifies the name of the boot environment to view file system specifics
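For example, for a hypothetical boot environment named second_disk, you would type:

# lufslist -n second_disk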
The following example displays a list.
Filesystem              fstype    size(Mb)  Mounted on
------------------------------------------------------------------
/dev/dsk/c0t0d0s1       swap        512.11  -
/dev/dsk/c0t4d0s3       ufs        3738.29  /
/dev/dsk/c0t4d0s4       ufs         510.24  /opt
This chapter describes how to update the GRUB menu.lst file manually. For example, you might want to change the timeout that determines how quickly the default OS boots. Or, you might want to add another OS to the GRUB menu. This chapter provides several examples for finding the menu.lst file.
For background information on GRUB based booting, see Chapter 6, GRUB Based Booting for Solaris Installation, in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
You must always use the bootadm command to locate the GRUB menu's menu.lst file. The list-menu subcommand finds the active GRUB menu. The menu.lst file lists all the operating systems that are installed on a system. The contents of this file dictate the list of operating systems that is displayed on the GRUB menu.
Typically, the active GRUB menu's menu.lst file is located at /boot/grub/menu.lst. In some situations, the GRUB menu.lst file resides elsewhere. For example, on a system that uses Solaris Live Upgrade, the GRUB menu.lst file might be on a boot environment that is not the currently running boot environment. Or, if you have upgraded a system with an x86 boot partition, the menu.lst file might reside in the /stubboot directory.

Only the active GRUB menu.lst file is used to boot the system. To modify the GRUB menu that is displayed when you boot the system, you must modify the active GRUB menu.lst file. Changing any other GRUB menu.lst file has no effect on the menu that is displayed when you boot the system. To determine the location of the active GRUB menu.lst file, use the bootadm command. The list-menu subcommand displays the location of the active GRUB menu. The following procedures determine the location of the GRUB menu's menu.lst file.
For more information about the bootadm command, see the bootadm(1M) man page.
In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. The Solaris OS has been booted and contains the GRUB menu.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To locate the menu.lst file, type:
# /sbin/bootadm list-menu
The location and contents of the file are displayed.
The location for the active GRUB menu is: /boot/grub/menu.lst
default 0
timeout 10
0 Solaris
1 Solaris failsafe
2 second_disk
3 second_disk failsafe
In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. In this example, the menu.lst file does not exist in the currently running boot environment. The second_disk boot environment has been booted. The Solaris boot environment contains the GRUB menu. The Solaris boot environment is not mounted.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To locate the menu.lst file, type:
# /sbin/bootadm list-menu
The location and contents of the file are displayed.
The location for the active GRUB menu is: /dev/dsk/device_name (not mounted)
The filesystem type of the menu device is <ufs>
default 0
timeout 10
0 Solaris
1 Solaris failsafe
2 second_disk
3 second_disk failsafe
Because the file system containing the menu.lst file is not mounted, mount the file system. Specify the UFS file system and the device name.
# /usr/sbin/mount -F ufs /dev/dsk/device_name /mnt
Where device_name specifies the location of the root (/) file system on the disk device of the boot environment that you want to mount. The device name is entered in the form of /dev/dsk/cwtxdysz. For example:
# /usr/sbin/mount -F ufs /dev/dsk/c0t1d0s0 /mnt
You can access the GRUB menu at /mnt/boot/grub/menu.lst.

Unmount the file system.

# /usr/sbin/umount /mnt
If you mount a boot environment or a file system of a boot environment, ensure that the file system or file systems are unmounted after use. If these file systems are not unmounted, future Solaris Live Upgrade operations on that boot environment might fail.
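As a quick check before running further Solaris Live Upgrade operations, you can verify that nothing is still mounted under the mount point you used and unmount it if necessary. /mnt here is an assumed mount point:

# mount | grep /mnt
# umount /mnt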
In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. The second_disk boot environment has been booted. The Solaris boot environment contains the GRUB menu. The Solaris boot environment is mounted at /.alt.Solaris.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To locate the menu.lst file, type:
# /sbin/bootadm list-menu
The location and contents of the file are displayed.
The location for the active GRUB menu is: /.alt.Solaris/boot/grub/menu.lst
default 0
timeout 10
0 Solaris
1 Solaris failsafe
2 second_disk
3 second_disk failsafe
Because the boot environment containing the GRUB menu is already mounted, you can access the menu.lst file at /.alt.Solaris/boot/grub/menu.lst.
In the following procedure, the system contains two operating systems: Solaris and a Solaris Live Upgrade boot environment, second_disk. The second_disk boot environment has been booted. Your system has been upgraded and an x86 boot partition remains. The boot partition is mounted at /stubboot and contains the GRUB menu. For an explanation of x86 boot partitions, see Partitioning Recommendations in Solaris 10 11/06 Installation Guide: Planning for Installation and Upgrade.
Become superuser or assume an equivalent role.
Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
To locate the menu.lst file, type:
# /sbin/bootadm list-menu
The location and contents of the file are displayed.
The location for the active GRUB menu is: /stubboot/boot/grub/menu.lst
default 0
timeout 10
0 Solaris
1 Solaris failsafe
2 second_disk
3 second_disk failsafe
You can access the menu.lst file at /stubboot/boot/grub/menu.lst.
This chapter provides examples of creating a boot environment, then upgrading and activating the new boot environment which then becomes the currently running system. This chapter contains the following sections:
Example of Upgrading With Solaris Live Upgrade (Command-Line Interface)
Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror) (Command-Line Interface)
Example of Upgrading Using Solaris Live Upgrade (Character User Interface)
In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 9 release. The new boot environment is upgraded to the Solaris 10 11/06 release by using the luupgrade command. The upgraded boot environment is activated by using the luactivate command. An example of falling back to the original boot environment is also given.
Description | For More Information |
---|---|
Caution – Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches. x86 only – Starting with the Solaris 10 1/06 release, if this set of patches is not installed, Solaris Live Upgrade fails and you might see an error message. Even if you do not see an error message, the necessary patches still might not be installed. Always verify that all patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade. The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, verify that you have the latest Solaris Live Upgrade patches installed. | Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for info doc 72099 on the SunSolve web site. |
If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster that is recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. | To install the Solaris Live Upgrade packages, use the pkgadd command. Or, for the Java 2 runtime environment, install the recommended patch cluster. The patch cluster is available at http://sunsolve.sun.com. |
Follow these steps to install the required patches.
From the SunSolve web site, obtain the list of patches.
# patchadd /net/server/export/patches
# init 6
This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.
Insert the Solaris Operating System DVD or Solaris Software - 2 CD.
Follow the step for the media you are using.
If you are using the Solaris Operating System DVD, change the directory to the installer and run the installer.
For SPARC based systems:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
# ./liveupgrade20
For x86 based systems:
# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
# ./liveupgrade20
The Solaris installation program GUI is displayed.
If you are using the Solaris Software - 2 CD, run the installer.
% ./installer
The Solaris installation program GUI is displayed.
From the Select Type of Install panel, click Custom.
On the Locale Selection panel, click the language to be installed.
Choose the software to install.
For DVD, on the Component Selection panel, click Next to install the packages.
For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click the other product choices to deselect this software.
Follow the directions on the Solaris installation program panels to install the software.
The source boot environment is named c0t4d0s0 by using the -c option. Naming the source boot environment is required only when the first boot environment is created. For more information about naming using the -c option, see the description in “To Create a Boot Environment for the First Time” Step 2.
The new boot environment is named c0t15d0s0. The -A option creates a description that is associated with the boot environment name.
The root (/) file system is copied to the new boot environment. Also, a new swap slice is created rather than sharing the source boot environment's swap slice.
# lucreate -A 'BE_description' -c c0t4d0s0 \
-m /:/dev/dsk/c0t15d0s0:ufs -m -:/dev/dsk/c0t15d0s1:swap \
-n c0t15d0s0
The inactive boot environment is named c0t15d0s0. The operating system image to be used for the upgrade is taken from the network.
# luupgrade -n c0t15d0s0 -u \
-s /net/ins-svr/export/Solaris_10/combined.solaris_wos
The lustatus command reports if the boot environment creation is complete. lustatus also shows if the boot environment is bootable.
# lustatus
boot environment           Is         Active   Active     Can      Copy
Name                       Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
c0t4d0s0                   yes        yes      yes        no       -
c0t15d0s0                  yes        no       no         yes      -
The c0t15d0s0 boot environment is made bootable with the luactivate command. The system is then rebooted and c0t15d0s0 becomes the active boot environment. The c0t4d0s0 boot environment is now inactive.
# luactivate c0t15d0s0
# init 6
The following procedures for falling back depend on your new boot environment activation situation:
For SPARC based systems:
The activation is successful, but you want to return to the original boot environment. See Example 9–1.
The activation fails and you can boot back to the original boot environment. See Example 9–2.
The activation fails and you must boot back to the original boot environment by using media or a net installation image. See Example 9–3.
For x86 based systems, starting with the Solaris 10 1/06 release and when you use the GRUB menu:
The activation fails, the GRUB menu is displayed correctly, but the new boot environment is not bootable. See Example 9–4.
The activation fails and the GRUB menu does not display. See Example 9–5.
In this example, the original boot environment is reinstated as the active boot environment although the new boot environment was activated successfully. The original boot environment is named first_disk.
# /sbin/luactivate first_disk
# init 6
In this example, the new boot environment was not bootable. You must return to the OK prompt before booting from the original boot environment, c0t4d0s0, in single-user mode.
OK boot net -s
# /sbin/luactivate first_disk
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# init 6
The original boot environment, c0t4d0s0, becomes the active boot environment.
In this example, the new boot environment was not bootable. You cannot boot from the original boot environment and must use media or a net installation image. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.
OK boot net -s
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the GRUB menu.
In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. To enable a fallback, the original boot environment is booted in single-user mode.
Become superuser or assume an equivalent role.
To display the GRUB menu, reboot the system.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
|second_disk                                                        |
|second_disk failsafe                                               |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.
From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to Example 9–5.
To edit the GRUB menu, type e.

Select kernel /boot/multiboot by using the arrow keys and type e. The GRUB edit menu is displayed.
grub edit>kernel /boot/multiboot
Boot to single user mode by typing -s.
grub edit>kernel /boot/multiboot -s
Boot and mount the boot environment. Then activate it.
# b
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the DVD or CD.
In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. To enable a fallback, the original boot environment is booted in single-user mode.
Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.
Become superuser or assume an equivalent role.
Boot from the DVD or CD.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.
To edit the GRUB menu, type e.

Select kernel /boot/multiboot by using the arrow keys and type e. The GRUB edit menu is displayed.
grub edit>kernel /boot/multiboot
Boot to single user mode by typing -s.
grub edit>kernel /boot/multiboot -s
Boot and mount the boot environment. Then activate and reboot.
Edit the GRUB menu by typing: e
Select the original boot environment by using the arrow keys.
grub edit>kernel /boot/multiboot -s
# b
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
This example shows you how to do the following tasks:
Create a RAID-1 volume (mirror) on a new boot environment
Break the mirror and upgrade one half of the mirror
Attach the other half of the mirror, the concatenation, to the new mirror
Figure 9–1 shows the current boot environment, which contains three physical disks.
Create a new boot environment, second_disk, that contains a mirror.
The following command performs these tasks.
lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system, which is copied to the mirror d10. All data on the mirror d10 is overwritten.
Two slices, c0t1d0s0 and c0t2d0s0, are specified to be used as submirrors. These two submirrors are attached to mirror d10.
# lucreate -c first_disk -n second_disk \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:attach \
-m /:/dev/dsk/c0t2d0s0:attach
Activate the second_disk boot environment.
# /sbin/luactivate second_disk
# init 6
Create another boot environment, third_disk.
The following command performs these tasks.
lucreate configures a UFS file system for the mount point root (/). A mirror, d20, is created.
Slice c0t1d0s0 is removed from its current mirror and is added to mirror d20. The contents of the submirror, the root (/) file system, are preserved and no copy occurs.
# lucreate -n third_disk \
-m /:/dev/md/dsk/d20:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:detach,attach,preserve
Upgrade the new boot environment, third_disk.
# luupgrade -u -n third_disk \
-s /net/installmachine/export/Solaris_10/OS_image
Add a patch to the upgraded boot environment.
# luupgrade -t -n third_disk -s /net/patches 222222-01
Activate the third_disk boot environment to make this boot environment the currently running system.
# /sbin/luactivate third_disk
# init 6
Delete the boot environment second_disk.
# ludelete second_disk
The following commands perform these tasks.
Clear mirror d10.
Check for the number for the concatenation of c0t2d0s0.
Attach the concatenation that is found by the metastat command to the mirror d20. The metattach command synchronizes the newly attached concatenation with the concatenation in mirror d20. All data on the concatenation is overwritten.
# metaclear d10
# metastat -p | grep c0t2d0s0
dnum 1 1 c0t2d0s0
# metattach d20 dnum
dnum – The number that the metastat command reports for the concatenation
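For example, if the metastat command had reported a concatenation named d12 (an assumed name), you would type:

# metattach d20 d12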
The new boot environment, third_disk, has been upgraded and is the currently running system. third_disk contains the root (/) file system that is mirrored.
Figure 9–2 shows the entire process of detaching a mirror and upgrading the mirror by using the commands in the preceding example.
Solaris Live Upgrade enables the creation of a new boot environment on RAID–1 volumes (mirrors). The current boot environment's file systems can be on any of the following:
A physical storage device
A Solaris Volume Manager controlled RAID–1 volume
A Veritas VXFS controlled volume
However, the new boot environment's target must be a Solaris Volume Manager RAID-1 volume. For example, the slice that is designated for a copy of the root (/) file system cannot be /dev/vx/dsk/rootvol. rootvol is the Veritas volume that contains the root (/) file system.
In this example, the current boot environment contains the root (/) file system on a volume that is not a Solaris Volume Manager volume. The new boot environment is created with the root (/) file system on the Solaris Volume Manager RAID-1 volume c0t2d0s0. The lucreate command migrates the current volume to the Solaris Volume Manager volume. The name of the new boot environment is svm_be. The lustatus command reports if the new boot environment is ready to be activated and be rebooted. The new boot environment is activated to become the current boot environment.
# lucreate -n svm_be -m /:/dev/md/dsk/d1:mirror,ufs \
-m /:/dev/dsk/c0t2d0s0:attach
# lustatus
# luactivate svm_be
# lustatus
# init 6
The following procedures cover the three-step process:
Creating the empty boot environment
Installing the archive
Activating the boot environment, which then becomes the currently running boot environment
The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When you use the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. The slices are reserved for the file systems specified, but no file systems are copied. The boot environment is named, but not actually created until installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, file systems are installed on the reserved slices. The boot environment is then activated.
In this first step, an empty boot environment is created. Slices are reserved for the file systems that are specified, but no copy of file systems from the current boot environment occurs. The new boot environment is named second_disk.
# lucreate -s - -m /:/dev/dsk/c0t1d0s0:ufs \
-n second_disk
The boot environment is ready to be populated with a Solaris Flash archive.
Figure 9–3 shows the creation of an empty boot environment.
In this second step, an archive is installed on the second_disk boot environment that was created in the previous example. The archive is retrieved over the network. The operating system versions for the -s and -a options are both Solaris 10 11/06 releases. The archive is named 10.flar.
# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archive/10.flar
The boot environment is ready to be activated.
In this last step, the second_disk boot environment is made bootable with the luactivate command. The system is then rebooted and second_disk becomes the active boot environment.
# luactivate second_disk
# init 6
For step-by-step information about creating an empty boot environment, see To Create an Empty Boot Environment for a Solaris Flash Archive (Command-Line Interface).
For step-by-step information about creating a Solaris Flash archive, see Chapter 3, Creating Solaris Flash Archives (Tasks), in Solaris 10 11/06 Installation Guide: Solaris Flash Archives (Creation and Installation).
For step-by-step information about activating a boot environment or falling back to the original boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).
In this example, a new boot environment is created on a system that is running the Solaris 9 release. The new boot environment is upgraded to the Solaris 10 11/06 release. The upgraded boot environment is then activated.
Insert the Solaris Operating System DVD or Solaris Software - 2 CD.
Run the installer for the media you are using.
If you are using the Solaris Operating System DVD, change directories to the installer and run the installer.
For SPARC based systems:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
# ./liveupgrade20
The Solaris installation program GUI is displayed.
For x86 based systems:
# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
# ./liveupgrade20
The Solaris installation program GUI is displayed.
If you are using the Solaris Software - 2 CD, run the installer.
% ./installer
The Solaris installation program GUI is displayed.
From the Select Type of Install panel, click Custom.
On the Locale Selection panel, click the language to be installed.
Choose the software to install.
For DVD, on the Component Selection panel, click Next to install the packages.
For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click the other product choices to deselect the software.
Follow the directions on the Solaris installation program panels to install the software.
Description | For More Information |
---|---|
Caution – Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches. x86 only – Starting with the Solaris 10 1/06 release, if this set of patches is not installed, Solaris Live Upgrade fails and you might see an error message. Even if you do not see an error message, the necessary patches still might not be installed. Always verify that all patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade. The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components that Solaris Live Upgrade depends on. If you experience any difficulties with Solaris Live Upgrade, verify that you have the latest Solaris Live Upgrade patches installed. | Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for info doc 72099 on the SunSolve web site. |
If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster that is recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. | To install the Solaris Live Upgrade packages, use the pkgadd command. Or, for the Java 2 runtime environment, install the recommended patch cluster. The patch cluster is available at http://sunsolve.sun.com. |
Follow these steps to install the required patches.
From the SunSolve web site, obtain the list of patches.
# patchadd /net/server/export/patches
# init 6
In this example, the source boot environment is named c0t4d0s0. The root (/) file system is copied to the new boot environment. Also, a new swap slice is created instead of sharing the source boot environment's swap slice.
Become superuser or assume an equivalent role.
Display the character user interface:
# /usr/sbin/lu
The Solaris Live Upgrade Main Menu is displayed.
From the main menu, select Create.
Name of Current Boot Environment: c0t4d0s0
Name of New Boot Environment: c0t15d0s0
Press F3.
The Configuration menu is displayed.
To select a slice from the configuration menu, press F2.
The Choices menu is displayed.
Choose slice 0 from disk c0t15d0 for the root (/) file system.
From the configuration menu, create a new slice for swap on c0t15d0 by selecting a swap slice to be split.
To select a slice for swap, press F2. The Choices menu is displayed.
Select slice 1 from disk c0t15d0 for the new swap slice.
Press F3 to create the new boot environment.
The new boot environment is then upgraded. The new version of the operating system for the upgrade is taken from a network image.
From the main menu, select Upgrade.
Name of New Boot Environment: c0t15d0s0
Package Media: /net/ins3-svr/export/Solaris_10/combined.solaris_wos
Press F3.
The c0t15d0s0 boot environment is made bootable. The system is then rebooted and c0t15d0s0 becomes the active boot environment. The c0t4d0s0 boot environment is now inactive.
From the main menu, select Activate.
Name of Boot Environment: c0t15d0s0
Do you want to force a Live Upgrade sync operations: no
Press F3.
Press Return.
Type:
# init 6
If a fallback is necessary, use the command-line procedures in the previous example: (Optional) To Fall Back to the Source Boot Environment.
The following list shows commands that you can type at the command line. Solaris Live Upgrade includes man pages for all the listed command-line utilities.
Task | Command |
---|---|
Activate an inactive boot environment. | luactivate(1M) |
Cancel a scheduled copy or create job. | lucancel(1M) |
Compare an active boot environment with an inactive boot environment. | lucompare(1M) |
Recopy file systems to update an inactive boot environment. | lumake(1M) |
Create a boot environment. | lucreate(1M) |
Name the active boot environment. | lucurr(1M) |
Delete a boot environment. | ludelete(1M) |
Add a description to a boot environment name. | ludesc(1M) |
List critical file systems for each boot environment. | lufslist(1M) |
Enable a mount of all of the file systems in a boot environment. This command enables you to modify the files in a boot environment while that boot environment is inactive. | lumount(1M) |
Rename a boot environment. | lurename(1M) |
List status of all boot environments. | lustatus(1M) |
Enable an unmount of all the file systems in a boot environment. This command enables you to modify the files in a boot environment while that boot environment is inactive. | luumount(1M) |
Upgrade an OS or install a flash archive on an inactive boot environment. | luupgrade(1M) |