This chapter provides examples of creating a boot environment, then upgrading and activating the new boot environment, which then becomes the currently running system. This chapter contains the following sections:
Example of Upgrading With Solaris Live Upgrade
Example of Detaching and Upgrading One Side of a RAID-1 Volume (Mirror)
Example of Migrating From an Existing Volume to a Solaris Volume Manager RAID-1 Volume
Example of Creating an Empty Boot Environment and Installing a Solaris Flash Archive
In this example, a new boot environment is created by using the lucreate command on a system that is running the Solaris 9 release. The new boot environment is upgraded to the Solaris 10 8/07 release by using the luupgrade command. The upgraded boot environment is activated by using the luactivate command. An example of falling back to the original boot environment is also given.
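As a quick reference, the following condensed sketch collects the full command sequence for this example. It simply gathers the commands that are explained step by step in the rest of this section, with the luupgrade source written as a single path; substitute your own device names and network paths.

# lucreate -A 'BE_description' -c /dev/dsk/c0t4d0s0 \
-m /:/dev/dsk/c0t15d0s0:ufs -m -:/dev/dsk/c0t15d0s1:swap \
-n /dev/dsk/c0t15d0s0
# luupgrade -n c0t15d0s0 -u \
-s /net/ins-svr/export/Solaris_10/combined.solaris_wos
# lustatus
# luactivate c0t15d0s0
# init 6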
| Description | For More Information |
|---|---|
| Caution – Correct operation of Solaris Live Upgrade requires that a limited set of patch revisions be installed for a particular OS version. Before installing or running Solaris Live Upgrade, you are required to install these patches. x86 only – Starting with the Solaris 10 1/06 release, if this set of patches is not installed, Solaris Live Upgrade fails and you might see an error message. Even if you do not see an error message, the necessary patches still might not be installed. Always verify that all the patches listed in the SunSolve info doc have been installed before attempting to install Solaris Live Upgrade. The patches listed in info doc 72099 are subject to change at any time. These patches potentially fix defects in Solaris Live Upgrade, as well as defects in components on which Solaris Live Upgrade depends. If you experience any difficulties with Solaris Live Upgrade, check that you have the latest Solaris Live Upgrade patches installed. | Ensure that you have the most recently updated patch list by consulting http://sunsolve.sun.com. Search for info doc 72099 on the SunSolve web site. |
| If you are running the Solaris 8 or Solaris 9 OS, you might not be able to run the Solaris Live Upgrade installer. These releases do not contain the set of patches needed to run the Java 2 runtime environment. You must have the patch cluster that is recommended for the Java 2 runtime environment in order to run the Solaris Live Upgrade installer and install the packages. | To install the Solaris Live Upgrade packages, use the pkgadd command. Or, for the Java 2 runtime environment, install the recommended patch cluster. The patch cluster is available at http://sunsolve.sun.com. |
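To check whether a given patch revision is already installed, you can list the installed patches with the showrev command. The patch ID below is a placeholder, not one of the required patches; substitute each ID from info doc 72099. If the command prints nothing, that patch revision is not installed.

# showrev -p | grep 123456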
Follow these steps to install the required patches.
From the SunSolve web site, obtain the list of patches.
# patchadd /net/server/export/patches
# init 6
This procedure assumes that the system is running Volume Manager. For detailed information about managing removable media with the Volume Manager, refer to System Administration Guide: Devices and File Systems.
Insert the Solaris Operating System DVD or Solaris Software - 2 CD.
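If Volume Manager is running, the media is mounted automatically. Before continuing, you can confirm the mount; /cdrom/cdrom0 is the conventional mount point, although the exact path can vary by system.

# ls /cdrom/cdrom0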
Follow the step for the media you are using.
If you are using the Solaris Operating System DVD, change the directory to the installer and run the installer.
For SPARC based systems:
# cd /cdrom/cdrom0/s0/Solaris_10/Tools/Installers
# ./liveupgrade20
For x86 based systems:
# cd /cdrom/cdrom0/Solaris_10/Tools/Installers
# ./liveupgrade20
The Solaris installation program GUI is displayed.
If you are using the Solaris Software - 2 CD, run the installer.
% ./installer
The Solaris installation program GUI is displayed.
From the Select Type of Install panel, click Custom.
On the Locale Selection panel, click the language to be installed.
Choose the software to install.
For DVD, on the Component Selection panel, click Next to install the packages.
For CD, on the Product Selection panel, click Default Install for Solaris Live Upgrade and click the other product choices to deselect this software.
Follow the directions on the Solaris installation program panels to install the software.
The source boot environment is named c0t4d0s0 by using the -c option. Naming the source boot environment is required only when the first boot environment is created. For more information about naming with the -c option, see Step 2 in “To Create a Boot Environment for the First Time.”
The new boot environment is named c0t15d0s0. The -A option creates a description that is associated with the boot environment name.
The root (/) file system is copied to the new boot environment. Also, a new swap slice is created rather than sharing the source boot environment's swap slice.
# lucreate -A 'BE_description' -c /dev/dsk/c0t4d0s0 -m /:/dev/dsk/c0t15d0s0:ufs \
-m -:/dev/dsk/c0t15d0s1:swap -n /dev/dsk/c0t15d0s0
The inactive boot environment is named c0t15d0s0. The operating system image to be used for the upgrade is taken from the network.
# luupgrade -n c0t15d0s0 -u \
-s /net/ins-svr/export/Solaris_10/combined.solaris_wos
The lustatus command reports whether the boot environment creation is complete and whether the boot environment is bootable.
# lustatus
boot environment           Is         Active   Active     Can      Copy
Name                       Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
c0t4d0s0                   yes        yes      yes        no       -
c0t15d0s0                  yes        no       no         yes      -
The c0t15d0s0 boot environment is made bootable with the luactivate command. The system is then rebooted and c0t15d0s0 becomes the active boot environment. The c0t4d0s0 boot environment is now inactive.
# luactivate c0t15d0s0
# init 6
The following procedures for falling back depend on your new boot environment activation situation:
For SPARC based systems:
The activation is successful, but you want to return to the original boot environment. See Example 10–1.
The activation fails and you can boot back to the original boot environment. See Example 10–2.
The activation fails and you must boot back to the original boot environment by using media or a net installation image. See Example 10–3.
For x86 based systems, starting with the Solaris 10 1/06 release and when you use the GRUB menu:
The activation fails, the GRUB menu is displayed correctly, but the new boot environment is not bootable. See Example 10–4.
The activation fails and the GRUB menu does not display. See Example 10–5.
In this example, the new boot environment was activated successfully, but the original c0t4d0s0 boot environment is reinstated as the active boot environment. The device name of the original boot environment is first_disk.
# /sbin/luactivate first_disk
# init 6
In this example, the new boot environment was not bootable. You must return to the OK prompt before booting from the original boot environment, c0t4d0s0, in single-user mode.
OK boot net -s
# /sbin/luactivate first_disk
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# init 6
The original boot environment, c0t4d0s0, becomes the active boot environment.
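After the reboot, you can confirm the fallback with lustatus. The following output is a sketch that assumes the fallback succeeded; values such as the Copy Status column vary by system.

# lustatus
boot environment           Is         Active   Active     Can      Copy
Name                       Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
c0t4d0s0                   yes        yes      yes        no       -
c0t15d0s0                  yes        no       no         yes      -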
In this example, the new boot environment was not bootable. You cannot boot from the original boot environment and must use media or a net installation image. The device is /dev/dsk/c0t4d0s0. The original boot environment, c0t4d0s0, becomes the active boot environment.
OK boot net -s
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the GRUB menu.
In this example, the GRUB menu is displayed correctly, but the new boot environment is not bootable. To enable a fallback, the original boot environment is booted in single-user mode.
Become superuser or assume an equivalent role.
To display the GRUB menu, reboot the system.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris                                                            |
|Solaris failsafe                                                   |
|second_disk                                                        |
|second_disk failsafe                                               |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
From the GRUB menu, select the original boot environment. The boot environment must have been created with GRUB software. A boot environment that was created before the Solaris 10 1/06 release is not a GRUB boot environment. If you do not have a bootable GRUB boot environment, then skip to Example 10–5.
Edit the GRUB menu by typing: e.
Select kernel /boot/multiboot by using the arrow keys and type e. The grub edit menu is displayed.
grub edit>kernel /boot/multiboot
Boot to single-user mode by appending -s to the kernel line.
grub edit>kernel /boot/multiboot -s
Boot and mount the boot environment. Then activate it.
# b
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
Starting with the Solaris 10 1/06 release, the following example provides the steps to fall back by using the DVD or CD.
In this example, the new boot environment was not bootable. Also, the GRUB menu does not display. To enable a fallback, the original boot environment is booted in single-user mode.
Insert the Solaris Operating System for x86 Platforms DVD or Solaris Software for x86 Platforms - 1 CD.
Become superuser or assume an equivalent role.
Boot from the DVD or CD.
# init 6
The GRUB menu is displayed.
GNU GRUB version 0.95 (616K lower / 4127168K upper memory)
+-------------------------------------------------------------------+
|Solaris 10 8/07                                                    |
|Solaris 10 8/07 Serial Console ttya                                |
|Solaris 10 8/07 Serial Console ttyb (for lx50, v60x and v65x)      |
+-------------------------------------------------------------------+
Use the ^ and v keys to select which entry is highlighted.
Press enter to boot the selected OS, 'e' to edit the
commands before booting, or 'c' for a command-line.
Wait for the default option to boot or choose any option displayed.
The installation screen is displayed.
+-------------------------------------------------------------------+
|Select the type of installation you want to perform:               |
|                                                                   |
|    1 Solaris Interactive                                          |
|    2 Custom JumpStart                                             |
|    3 Solaris Interactive Text (Desktop session)                   |
|    4 Solaris Interactive Text (Console session)                   |
|    5 Apply driver updates                                         |
|    6 Single user shell                                            |
|                                                                   |
|    Enter the number of your choice followed by the <ENTER> key.   |
|    Alternatively, enter custom boot arguments directly.           |
|                                                                   |
|    If you wait 30 seconds without typing anything,                |
|    an interactive installation will be started.                   |
+-------------------------------------------------------------------+
Choose the “Single user shell” option.
The following message is displayed.
Do you wish to automatically update the boot archive? y/n
Type: n
Starting shell...
#
You are now in single user mode.
Mount the boot environment. Then activate and reboot.
# fsck /dev/dsk/c0t4d0s0
# mount /dev/dsk/c0t4d0s0 /mnt
# /mnt/sbin/luactivate
Do you want to fallback to activate boot environment c0t4d0s0
(yes or no)? yes
# umount /mnt
# init 6
This example shows you how to do the following tasks:
Create a RAID-1 volume (mirror) on a new boot environment
Break the mirror and upgrade one half of the mirror
Attach the other half of the mirror, the concatenation, to the new mirror
Figure 10–1 shows the current boot environment, which contains three physical disks.
Create a new boot environment, second_disk, that contains a mirror.
The following command performs these tasks.
lucreate configures a UFS file system for the mount point root (/). A mirror, d10, is created. This mirror is the receptacle for the current boot environment's root (/) file system, which is copied to the mirror d10. All data on the mirror d10 is overwritten.
Two slices, c0t1d0s0 and c0t2d0s0, are specified to be used as submirrors. These two submirrors are attached to mirror d10.
# lucreate -c first_disk -n second_disk \
-m /:/dev/md/dsk/d10:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:attach \
-m /:/dev/dsk/c0t2d0s0:attach
Activate the second_disk boot environment.
# /sbin/luactivate second_disk
# init 6
Create another boot environment, third_disk.
The following command performs these tasks.
lucreate configures a UFS file system for the mount point root (/). A mirror, d20, is created.
Slice c0t1d0s0 is removed from its current mirror and is added to mirror d20. The contents of the submirror, the root (/) file system, are preserved and no copy occurs.
# lucreate -n third_disk \
-m /:/dev/md/dsk/d20:ufs,mirror \
-m /:/dev/dsk/c0t1d0s0:detach,attach,preserve
Upgrade the new boot environment, third_disk.
# luupgrade -u -n third_disk \
-s /net/installmachine/export/Solaris_10/OS_image
Add a patch to the upgraded boot environment.
# luupgrade -t -n third_disk -s /net/patches 222222-01
Activate the third_disk boot environment to make this boot environment the currently running system.
# /sbin/luactivate third_disk
# init 6
Delete the boot environment second_disk.
# ludelete second_disk
The following commands perform these tasks.
Clear mirror d10.
Find the number of the concatenation for c0t2d0s0 in the metastat output.
Attach the concatenation that is found by the metastat command to the mirror d20. The metattach command synchronizes the newly attached concatenation with the concatenation in mirror d20. All data on the concatenation is overwritten.
# metaclear d10
# metastat -p | grep c0t2d0s0
dnum 1 1 c0t2d0s0
# metattach d20 dnum
In these commands, num is the number that the metastat command reports for the concatenation of c0t2d0s0.
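For example, if the metastat command reported the concatenation as d1 (an illustrative number, not taken from the original example), the final command would be:

# metattach d20 d1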
The new boot environment, third_disk, has been upgraded and is the currently running system. third_disk contains the root (/) file system that is mirrored.
Figure 10–2 shows the entire process of detaching a mirror and upgrading the mirror by using the commands in the preceding example.
Solaris Live Upgrade enables the creation of a new boot environment on RAID–1 volumes (mirrors). The current boot environment's file systems can be on any of the following:
A physical storage device
A Solaris Volume Manager controlled RAID–1 volume
A Veritas VxVM controlled volume
However, the new boot environment's target must be a Solaris Volume Manager RAID-1 volume. For example, the current root (/) file system can reside on a Veritas volume such as /dev/vx/dsk/rootvol, where rootvol is the volume that contains the root (/) file system, but the volume that is designated to receive the copy of the root (/) file system must be a Solaris Volume Manager volume, such as the mirror d1 in the example that follows.
In this example, the current boot environment contains the root (/) file system on a volume that is not a Solaris Volume Manager volume. The new boot environment is created with the root (/) file system on the Solaris Volume Manager RAID-1 volume d1, with the slice c0t2d0s0 attached as its submirror. The lucreate command migrates the current volume to the Solaris Volume Manager volume. The name of the new boot environment is svm_be. The lustatus command reports whether the new boot environment is ready to be activated and rebooted. The new boot environment is activated to become the current boot environment.
# lucreate -n svm_be -m /:/dev/md/dsk/d1:mirror,ufs \
-m /:/dev/dsk/c0t2d0s0:attach
# lustatus
# luactivate svm_be
# lustatus
# init 6
The following procedures cover the three-step process:
Creating the empty boot environment
Installing the archive
Activating the boot environment, which then becomes the currently running boot environment
The lucreate command creates a boot environment that is based on the file systems in the active boot environment. When you use the lucreate command with the -s - option, lucreate quickly creates an empty boot environment. The slices are reserved for the file systems specified, but no file systems are copied. The boot environment is named, but not actually created until installed with a Solaris Flash archive. When the empty boot environment is installed with an archive, file systems are installed on the reserved slices. The boot environment is then activated.
In this first step, an empty boot environment is created. Slices are reserved for the file systems that are specified, but no copy of file systems from the current boot environment occurs. The new boot environment is named second_disk.
# lucreate -s - -m /:/dev/dsk/c0t1d0s0:ufs \
-n second_disk
The boot environment is ready to be populated with a Solaris Flash archive.
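At this point, lustatus can confirm that the new boot environment exists but is not yet complete. The following output is a sketch: the name first_disk for the current boot environment is assumed for illustration, and the exact values depend on your configuration.

# lustatus
boot environment           Is         Active   Active     Can      Copy
Name                       Complete   Now      OnReboot   Delete   Status
------------------------------------------------------------------------
first_disk                 yes        yes      yes        no       -
second_disk                no         no       no         yes      -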
Figure 10–3 shows the creation of an empty boot environment.
In this second step, an archive is installed on the second_disk boot environment that was created in the previous example. The operating system versions for the -s and -a options are both the Solaris 10 8/07 release. The archive, named 10.flar, is retrieved from a server over the network.
# luupgrade -f -n second_disk \
-s /net/installmachine/export/Solaris_10/OS_image \
-a /net/server/archive/10.flar
The boot environment is ready to be activated.
In this last step, the second_disk boot environment is made bootable with the luactivate command. The system is then rebooted and second_disk becomes the active boot environment.
# luactivate second_disk
# init 6
For step-by-step information about creating an empty boot environment, see To Create an Empty Boot Environment for a Solaris Flash Archive.
For step-by-step information about creating a Solaris Flash archive, see Chapter 3, Creating Solaris Flash Archives (Tasks), in Solaris 10 8/07 Installation Guide: Solaris Flash Archives (Creation and Installation).
For step-by-step information about activating a boot environment or falling back to the original boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).