This chapter provides examples of creating a boot environment, upgrading it, and then activating the new boot environment, which becomes the currently running system.
This chapter describes Solaris Live Upgrade for UFS file systems. For procedures for migrating a UFS file system to a ZFS root pool or creating and installing a ZFS root pool, see Chapter 13, Creating a Boot Environment for ZFS Root Pools.
This chapter contains the following sections:
Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror)
Example of Migrating From an Existing Volume to a Solaris Volume Manager RAID-1 Volume
Example of Creating an Empty Boot Environment and Installing a Solaris Flash Archive
In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 9 release. The new boot environment is upgraded to the Solaris 10 10/09 release by using the luupgrade command. The upgraded boot environment is activated by using the luactivate command. An example of falling back to the original boot environment is also given.
Before running Solaris Live Upgrade for the first time, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve Infodoc 206844. Search for the Infodoc 206844 (formerly 72099) on the SunSolve web site.
The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.
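One way to review what is already applied before you begin is the showrev command. This is a sketch; the patch ID shown is hypothetical, so substitute the IDs listed in Infodoc 206844:

```shell
# List all patches currently applied to the system
showrev -p

# Check whether a specific patch is already applied
# (137321 is a hypothetical patch ID used for illustration)
showrev -p | grep 137321
```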
The following steps summarize the procedure described in SunSolve Infodoc 206844.
This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.
Become superuser or assume an equivalent role.
From the SunSolve web site, follow the instructions in Infodoc 206844 to remove and add Solaris Live Upgrade packages.
Remove existing Solaris Live Upgrade packages.
The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade or patch by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading or patching to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.
# pkgrm SUNWlucfg SUNWluu SUNWlur
Install the new Solaris Live Upgrade packages.
You can install the packages by using the liveupgrade20 command that is on the installation DVD or CD or by using the pkgadd command. The liveupgrade20 command requires Java software. If your system does not have Java software installed, then you need to use the pkgadd command to install the packages. See the SunSolve Infodoc for more information.
If you are using the Solaris Operating System DVD, change directories and run the installer:
Change directories.
# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
For SPARC based systems, the path to the installer is different for releases previous to the Solaris 10 10/08 release:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
Run the installer.
# ./liveupgrade20 -noconsole -nodisplay
The -noconsole and -nodisplay options prevent the character user interface (CUI) from displaying.
The Solaris Live Upgrade CUI is no longer supported.
If you are using the Solaris Software – 2 CD, you can run the installer without changing the path.
% ./installer
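If Java software is not installed and you must use the pkgadd method mentioned earlier, the packages can be added directly from the media. This is a sketch; the path to the Product directory is an assumption and can differ depending on the media and release:

```shell
# Add the Solaris Live Upgrade packages with pkgadd
# (the /cdrom/cdrom0/Solaris_10/Product path is an assumed
# media layout; adjust it for your DVD or CD)
pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu
```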
Verify that the packages have been installed successfully.
# pkgchk -v SUNWlucfg SUNWlur SUNWluu
Install the patches listed in Infodoc 206844.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches.
From the SunSolve web site, obtain the list of patches.
Change to the patch directory as in this example.
# cd /var/tmp/lupatches
Install the patches.
# patchadd -M path-to-patches patch-id patch-id
path-to-patches is the path to the patch directory, such as /var/tmp/lupatches. patch-id is the patch number or numbers. Separate multiple patch names with a space.
The patches must be applied in the order that is specified in Infodoc 206844.
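For example, applying two patches from the local patch directory might look like the following sketch. The patch IDs here are hypothetical placeholders; use the actual IDs and ordering given in Infodoc 206844:

```shell
# Apply patches from /var/tmp/lupatches in the documented order
# (222222-01 and 333333-01 are hypothetical patch IDs)
patchadd -M /var/tmp/lupatches 222222-01 333333-01
```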
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required. Otherwise, Solaris Live Upgrade fails.
# init 6
You now have the packages and patches necessary for a successful creation of a new boot environment.
The source boot environment is named c0t4d0s0 by using the -c option. Naming the source boot environment is required only when the first boot environment is created. For more information about naming with the -c option, see the description in Step 2 of “To Create a Boot Environment for the First Time.”
The new boot environment is named c0t15d0s0. The -A option creates a description that is associated with the boot environment name.
The root (/) file system is copied to the new boot environment. Also, a new swap slice is created rather than sharing the source boot environment's swap slice.
# lucreate -A 'BE_description' -c c0t4d0s0 \
-m /:/dev/dsk/c0t15d0s0:ufs -m -:/dev/dsk/c0t15d0s1:swap \
-n c0t15d0s0
The inactive boot environment is named c0t15d0s0. The operating system image to be used for the upgrade is taken from the network.
# luupgrade -n c0t15d0s0 -u -s /net/ins-svr/export/Solaris_10 \
combined.solaris_wos
The lustatus command reports whether the boot environment creation is complete and whether the boot environment is bootable.
# lustatus
boot environment   Is         Active   Active     Can      Copy
Name               Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
c0t4d0s0           yes        yes      yes        no       -
c0t15d0s0          yes        no       no         yes      -
The c0t15d0s0 boot environment is made bootable with the luactivate command. The system is then rebooted and c0t15d0s0 becomes the active boot environment. The c0t4d0s0 boot environment is now inactive.
# luactivate c0t15d0s0
# init 6
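After the reboot, the lustatus command can be run again to confirm the switch. This is a sketch of the expected check; the output follows the column layout shown in the earlier lustatus example:

```shell
# Confirm which boot environment is active after the reboot
lustatus

# Or report on just the new boot environment by name
lustatus c0t15d0s0
```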
The following procedures for falling back depend on your new boot environment activation situation:
For SPARC based systems:
The activation is successful, but you want to return to the original boot environment. See Example 9–1.
The activation fails and you can boot back to the original boot environment. See Example 9–2.
The activation fails and you must boot back to the original boot environment by using media or a net installation image. See Example 9–3.
For x86 based systems, starting with the Solaris 10 1/06 release and when you use the GRUB menu:
The activation fails, the GRUB menu is displayed correctly, but the new boot environment is not bootable. See Example 9–4.
The activation fails and the GRUB menu does not display. See Example 9–5.
In this example, the original boot environment is reinstated as the active boot environment although the new boot environment was activated successfully. The original boot environment's name is first_disk.
# /sbin/luactivate first_disk
# init 6
In this example, the new boot environment was not bootable. You must return to the OK prompt before booting from the original boot environment, c0t4d0s0, in single-user mode.
OK boot net -s
# /sbin/luactivate first_disk
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# init 6
The original boot environment, c0t4d0s0, becomes the active boot environment.
In this example, the new boot environment was not bootable. You cannot boot from the original boot environment and must use media or a net installation image. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.
OK boot net -s
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the GRUB menu.
In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. To enable a fallback, the original boot environment is booted in single-user mode.
Become superuser or assume an equivalent role.
To display the GRUB menu, reboot the system.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
|second_disk                                                        |
|second_disk failsafe                                               |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.
From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to Example 9–5.
Edit the GRUB menu by typing: e.
Select kernel /boot/multiboot by using the arrow keys and type e. The grub edit menu is displayed.
grub edit>kernel /boot/multiboot
Boot to single-user mode by typing -s.
grub edit>kernel /boot/multiboot -s
Boot and mount the boot environment. Then activate it.
grub edit> b
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the DVD or CD.
In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. To enable a fallback, the original boot environment is booted in single-user mode.
Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.
Become superuser or assume an equivalent role.
Boot from the DVD or CD.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris 10 10/09                                                   |
|Solaris 10 10/09 Serial Console ttya                               |
|Solaris 10 10/09 Serial Console ttyb (for lx50, v60x and v65x)     |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted. Press
enter to boot the selected OS, 'e' to edit the commands before
booting, or 'c' for a command-line.
Wait for the default option to boot or choose any option displayed.
The installation screen is displayed.
+-------------------------------------------------------------------+
|Select the type of installation you want to perform:               |
|                                                                   |
|    1 Solaris Interactive                                          |
|    2 Custom JumpStart                                             |
|    3 Solaris Interactive Text (Desktop session)                   |
|    4 Solaris Interactive Text (Console session)                   |
|    5 Apply driver updates                                         |
|    6 Single user shell                                            |
|                                                                   |
|    Enter the number of your choice followed by the <ENTER> key.   |
|    Alternatively, enter custom boot arguments directly.           |
|                                                                   |
|    If you wait 30 seconds without typing anything,                |
|    an interactive installation will be started.                   |
+-------------------------------------------------------------------+
Choose the “Single user shell” option.
The following message is displayed.
Do you wish to automatically update the boot archive? y /n
Type: n
Starting shell...
#
You are now in single user mode.
Mount the boot environment. Then activate and reboot.
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
This example shows you how to do the following tasks:
Create a RAID-1 volume (mirror) on a new boot environment
Break the mirror and upgrade one half of the mirror
Attach the other half of the mirror, the concatenation, to the new mirror
Figure 9–1 shows the current boot environment, which contains three physical disks.
Create a new boot environment, second_disk, that contains a mirror.
The following command performs these tasks.
lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system, which is copied to the mirror d10. All data on the mirror d10 is overwritten.
Two slices, c0t1d0s0 and c0t2d0s0, are specified to be used as submirrors. These two submirrors are attached to mirror d10.
# lucreate -c first_disk -n second_disk \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:attach \
-m /:/dev/dsk/c0t2d0s0:attach
Activate the second_disk boot environment.
# /sbin/luactivate second_disk
# init 6
Create another boot environment, third_disk.
The following command performs these tasks.
lucreate configures a UFS file system for the mount point root (/). A mirror, d20, is created.
Slice c0t1d0s0 is removed from its current mirror and is added to mirror d20. The contents of the submirror, the root (/) file system, are preserved and no copy occurs.
# lucreate -n third_disk \
-m /:/dev/md/dsk/d20:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:detach,attach,preserve
Upgrade the new boot environment, third_disk.
# luupgrade -u -n third_disk \
-s /net/installmachine/export/Solaris_10/OS_image
Add a patch to the upgraded boot environment.
# luupgrade -t -n third_disk -s /net/patches 222222-01
Activate the third_disk boot environment to make this boot environment the currently running system.
# /sbin/luactivate third_disk
# init 6
Delete the boot environment second_disk.
# ludelete second_disk
The following commands perform these tasks.
Clear mirror d10.
Find the number of the concatenation for c0t2d0s0.
Attach the concatenation that is found by the metastat command to the mirror d20. The metattach command synchronizes the newly attached concatenation with the concatenation in mirror d20. All data on the concatenation is overwritten.
# metaclear d10
# metastat -p | grep c0t2d0s0
dnum 1 1 c0t2d0s0
# metattach d20 dnum
dnum is the number that is found by the metastat command for the concatenation.
The new boot environment, third_disk, has been upgraded and is the currently running system. third_disk contains the root (/) file system that is mirrored.
Figure 9–2 shows the entire process of detaching a mirror and upgrading the mirror by using the commands in the preceding example.
Solaris Live Upgrade enables the creation of a new boot environment on RAID–1 volumes (mirrors). The current boot environment's file systems can be on any of the following:
A physical storage device
A Solaris Volume Manager controlled RAID–1 volume
A Veritas VXFS controlled volume
However, the new boot environment's target must be a Solaris Volume Manager RAID-1 volume. For example, the volume that currently holds the root (/) file system could be the Veritas volume /dev/vx/dsk/rootvol. rootvol is the volume that contains the root (/) file system.
In this example, the current boot environment contains the root (/) file system on a volume that is not a Solaris Volume Manager volume. The new boot environment is created with the root (/) file system on the Solaris Volume Manager RAID-1 volume c0t2d0s0. The lucreate command migrates the current volume to the Solaris Volume Manager volume. The name of the new boot environment is svm_be. The lustatus command reports if the new boot environment is ready to be activated and be rebooted. The new boot environment is activated to become the current boot environment.
# lucreate -n svm_be -m /:/dev/md/dsk/d1:mirror,ufs \
-m /:/dev/dsk/c0t2d0s0:attach
# lustatus
# luactivate svm_be
# lustatus
# init 6
The following procedures cover the three-step process:
Creating the empty boot environment
Installing the archive
Activating the boot environment, which then becomes the currently running boot environment
The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When you use the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. The slices are reserved for the file systems specified, but no file systems are copied. The boot environment is named, but not actually created until installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, file systems are installed on the reserved slices. The boot environment is then activated.
In this first step, an empty boot environment is created. Slices are reserved for the file systems that are specified, but no copy of file systems from the current boot environment occurs. The new boot environment is named second_disk.
# lucreate -s - -m /:/dev/dsk/c0t1d0s0:ufs \
-n second_disk
The boot environment is ready to be populated with a Solaris Flash archive.
Figure 9–3 shows the creation of an empty boot environment.
In this second step, an archive is installed on the second_disk boot environment that was created in the previous example. The archive is accessed over the network. The operating system versions for the -s and -a options are both the Solaris 10 10/09 release. The archive is named 10.flar.
# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archive/10.flar
The boot environment is ready to be activated.
In this last step, the second_disk boot environment is made bootable with the luactivate command. The system is then rebooted and second_disk becomes the active boot environment.
# luactivate second_disk
# init 6
For step-by-step information about creating an empty boot environment, see To Create an Empty Boot Environment for a Solaris Flash Archive.
For step-by-step information about creating a Solaris Flash archive, see Chapter 3, Creating Solaris Flash Archives (Tasks), in Solaris 10 10/09 Installation Guide: Solaris Flash Archives (Creation and Installation).
For step-by-step information about activating a boot environment or falling back to the original boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).