Oracle Solaris 10 8/11 Installation Guide: Live Upgrade and Upgrade Planning

13. Creating a Boot Environment for ZFS Root Pools

How to Create a ZFS Boot Environment Within the Same ZFS Root Pool
If you have an existing ZFS root pool and want to create a new ZFS boot environment within that pool, the following procedure provides the steps. After the inactive boot environment is created, the new boot environment can be upgraded and activated at your convenience. The -p option is not required when you create a boot environment within the same pool.
The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.
The following steps summarize the instructions in the My Oracle Support knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844).
Note - Using Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Live Upgrade software to perform the tasks.
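To confirm that the installed release meets this requirement, you can check the /etc/release file. The release string must report Solaris 10 10/08 or later.

# cat /etc/release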
Become superuser or assume an equivalent role.

Note - Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
The three Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Live Upgrade, upgrading to the target release fails.

Note - The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.
# pkgrm SUNWlucfg SUNWluu SUNWlur
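Next, install the new Live Upgrade packages from the release to which you are upgrading. For example, assuming the target release media is mounted at /cdrom/cdrom0, the installation might look like this:

# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu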
Ensure that you have the most recently updated patch list by consulting My Oracle Support. Search for the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on the My Oracle Support web site.
If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
From the My Oracle Support web site, obtain the list of patches.
Change to the patch directory as in this example.
# cd /var/tmp/lupatches
Install the patches with the patchadd command.
# patchadd -M path-to-patches patch_id patch_id
path-to-patches is the path to the patch directory, such as /var/tmp/lupatches. patch_id is the patch number or numbers. Separate multiple patch names with a space.
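For example, the following invocation installs two patches from the /var/tmp/lupatches directory. The patch IDs shown are placeholders; substitute the IDs listed in the knowledge document:

# patchadd -M /var/tmp/lupatches 121430-44 121431-45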
Note - The patches need to be applied in the order that is specified in the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844).
Reboot the system if necessary. Certain patches require a reboot to be effective.
x86 only: Rebooting the system is required or Live Upgrade fails.
# init 6
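After the reboot, you can verify that a given patch was applied by listing the installed patches with the showrev command; the patch ID here is the same placeholder used in the example above:

# showrev -p | grep 121430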
You now have the packages and patches necessary for a successful creation of a new boot environment.
Create the new boot environment.

# lucreate [-c zfsBE] -n new-zfsBE
-c zfsBE

Assigns the name zfsBE to the current boot environment. This option is not required and is used only when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.

-n new-zfsBE

Assigns the name to the boot environment to be created. The name must be unique on the system.
The creation of the new boot environment is almost instantaneous. A snapshot is created of each dataset in the current ZFS root pool, and a clone is then created from each snapshot. Snapshots are very disk-space efficient, and this process uses minimal disk space. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.
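If you want to inspect the snapshots that back the new clone, the zfs list command can display them; for example, assuming the root pool is named rpool as in the output shown later in this procedure:

# zfs list -t snapshot -r rpool/ROOT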
The lustatus command reports whether the boot environment creation is complete and bootable.
# lustatus
boot environment   Is        Active   Active     Can      Copy
Name               Complete  Now      OnReboot   Delete   Status
------------------------------------------------------------------------
zfsBE              yes       yes      yes        no       -
new-zfsBE          yes       no       no         yes      -
In this example, the ZFS root pool is named rpool, and the @ symbol indicates a snapshot. The new boot environment mount points are temporary until the luactivate command is executed. The /dump and /swap volumes are shared with the ZFS root pool and boot environments within the root pool.
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       9.29G  57.6G    20K  /rpool
rpool/ROOT                  5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE            5.38G  57.6G   551M
rpool/ROOT/zfsBE@new-zfsBE  66.5K      -   551M  -
rpool/ROOT/new-zfsBE        85.5K  57.6G   551M  /tmp/.alt.103197
rpool/dump                  1.95G      -  1.95G  -
rpool/swap                  1.95G      -  1.95G  -
You can now upgrade and activate the new boot environment. See Example 13-2.
Example 13-2 Creating a Boot Environment Within the Same ZFS Root Pool
The following commands create a new ZFS boot environment, new-zfsBE. The -p option is not required because the boot environment is being created within the same root pool.
# lucreate [-c zfsBE] -n new-zfsBE
Analyzing system configuration.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Creating configuration for boot environment new-zfsBE.
Source boot environment is zfsBE.
Creating boot environment new-zfsBE.
Cloning file systems from boot environment zfsBE to create
boot environment new-zfsBE.
Creating snapshot for <rpool> on <rpool>.
Creating clone for <rpool>.
Setting canmount=noauto for <rpool> in zone <global> on <rpool>.
Population of boot environment <new-zfsBE> successful on <rpool>.
# lustatus
boot environment   Is        Active   Active     Can      Copy
Name               Complete  Now      OnReboot   Delete   Status
------------------------------------------------------------------------
zfsBE              yes       yes      yes        no       -
new-zfsBE          yes       no       no         yes      -
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       9.29G  57.6G    20K  /rpool
rpool/ROOT                  5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE            5.38G  57.6G   551M
rpool/ROOT/zfsBE@new-zfsBE  66.5K      -   551M  -
rpool/ROOT/new-zfsBE        85.5K  57.6G   551M  /tmp/.alt.103197
rpool/dump                  1.95G      -  1.95G  -
rpool/swap                  1.95G      -  1.95G  -
You can now upgrade and activate the new boot environment. For an example of upgrading a ZFS boot environment, see Example 13-1. For more examples of using the luupgrade command, see Chapter 5, Upgrading With Live Upgrade (Tasks).
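For instance, the following sketch upgrades the new boot environment from a network installation image; the image path is hypothetical, so point the -s option at an actual Oracle Solaris installation image on your site:

# luupgrade -u -n new-zfsBE -s /net/installserver/export/s10u10/OS_image

After any upgrade completes, activate the new boot environment: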
# luactivate new-zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.
Reboot the system to the ZFS boot environment.
# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.
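After the system comes back up, you can confirm that the activation took effect by running lustatus again; the Active Now and Active OnReboot columns for new-zfsBE should now read yes.

# lustatus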