Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning

Creating a Boot Environment In a New Root Pool

If you have an existing ZFS root pool and want to create a new ZFS boot environment in a new root pool, the following procedure provides the steps. After the inactive boot environment is created, it can be upgraded and activated at your convenience. The -p option is required to specify where to place the new boot environment. The new ZFS root pool must already exist before the boot environment is created, and it must be on a separate slice to be bootable and upgradeable.
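
At a high level, the procedure reduces to the command sequence sketched below. The pool name rpool2, the slice c0t1d0s5, and the boot environment names zfsBE and new-zfsBE are illustrative placeholders that match the examples in this chapter; substitute the names and devices for your own system.

# zpool create rpool2 c0t1d0s5               # create the new root pool on a single bootable slice
# lucreate -c zfsBE -n new-zfsBE -p rpool2   # copy the running BE into the new pool
# lustatus                                   # confirm that the new BE is complete
# luactivate new-zfsBE                       # activate the new BE when convenient
# init 6                                     # reboot to switch to the activated BE

The steps that follow describe each part of this sequence in detail.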

How to Create a Boot Environment on a New ZFS Root Pool

  1. Before running Live Upgrade for the first time, you must install the latest Live Upgrade packages from installation media and install the patches listed in the knowledge document. Search for the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on the My Oracle Support web site.

    The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

    The following substeps describe the steps in the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support.


    Note - Using Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Live Upgrade software to perform the tasks.


    1. Become superuser or assume an equivalent role.

      Note - Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.


    2. From the My Oracle Support web site, follow the instructions in knowledge document 1004881.1 to remove and add Live Upgrade packages.

      The three Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Live Upgrade, upgrading to the target release fails.


      Note - The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.


      # pkgrm SUNWlucfg SUNWluu SUNWlur
    3. Install the new Live Upgrade packages. For instructions, see Installing Live Upgrade. A consolidated command sketch for this step appears at the end of the step.
    4. Before running Live Upgrade, you are required to install the patches listed in the knowledge document. These patches ensure that you have all the latest bug fixes and new features in the release.

      Ensure that you have the most recently updated patch list by consulting My Oracle Support. Search for the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support.

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.

      • From the My Oracle Support web site, obtain the list of patches.

      • Change to the patch directory as in this example.

        # cd /var/tmp/lupatches
      • Install the patches with the patchadd command.

        # patchadd -M path-to-patches patch_id patch_id

        path-to-patches is the path to the patch directory, such as /var/tmp/lupatches. patch_id is the patch number or numbers. Separate multiple patch numbers with a space.


        Note - The patches need to be applied in the order that is specified in the knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) on My Oracle Support.


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required or Live Upgrade fails.

        # init 6

        You now have the packages and patches necessary for a successful migration.
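
        Taken together, the substeps in this step amount to a command sequence along the following lines. The media path /cdrom/cdrom0/Solaris_10/Product is only an assumed mount point for the target-release installation media, and patch_id stands for the patch numbers listed in knowledge document 1004881.1; adjust both for your environment.

        # pkgrm SUNWlucfg SUNWluu SUNWlur                                        # remove the existing Live Upgrade packages
        # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu   # add the packages for the target release (assumed media path)
        # mkdir -p /var/tmp/lupatches                                            # local directory for the downloaded patches
        # cd /var/tmp/lupatches
        # patchadd -M /var/tmp/lupatches patch_id patch_id                       # apply the patches in the documented order
        # init 6                                                                 # reboot if any of the patches requires it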

  2. Create a ZFS root pool.

    The ZFS root pool must be on a single slice to be bootable and upgradeable.

    # zpool create rpool2 c0t1d0s5
    rpool2

    Name of the new ZFS root pool.

    c0t1d0s5

    Places rpool2 on the bootable slice c0t1d0s5.

    For information about creating a new root pool, see the Oracle Solaris ZFS Administration Guide.
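
    Before creating the boot environment, you can check that the new pool is online and built on the intended slice. The following quick check uses the rpool2 pool created in the previous command.

    # zpool status rpool2     # verify that the pool is ONLINE and uses the single slice c0t1d0s5
    # zpool list rpool2       # review the pool's size and free space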

  3. Create the new boot environment.
    # lucreate [-c zfsBE] -n new-zfsBE -p rpool2
    zfsBE

    The name for the current ZFS boot environment.

    new-zfsBE

    The name for the boot environment to be created. The name must be unique on the system.

    -p rpool2

    Places the newly created ZFS boot environment in the ZFS root pool rpool2.

    Creating the new ZFS boot environment might take a while because the file system data is copied to the new ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment, as shown in the example commands at the end of this procedure.

  4. (Optional) Verify that the boot environment is complete.

    The lustatus command reports whether the boot environment creation is complete and bootable.

    # lustatus
    Boot Environment           Is       Active Active    Can    Copy
    Name                       Complete Now    On Reboot Delete Status
    -------------------------- -------- ------ --------- ------ ----------
    zfsBE                      yes      yes    yes       no     -
    new-zfsBE                  yes      no     no        yes    -
  5. (Optional) Verify the basic dataset information on the system.

    The following example displays the names of all datasets on the system. The mount point listed for the new boot environment is temporary until the luactivate command is executed. The new boot environment shares the rpool2/dump and rpool2/swap volumes with any other boot environments in the rpool2 root pool.

    # zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    rpool                 11.4G  2.95G    31K  /rpool
    rpool/ROOT            4.34G  2.95G    31K  legacy
    rpool/ROOT/new-zfsBE  4.34G  2.95G  4.34G  /
    rpool/dump            2.06G  5.02G    16K  -
    rpool/swap            5.04G  7.99G    16K  -

    You can now upgrade and activate the new boot environment.
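
    For example, assuming the new boot environment is named new-zfsBE and a Solaris installation image is available at the hypothetical path /net/install/export/s10u11, the upgrade and activation might look like the following sketch.

    # luupgrade -u -n new-zfsBE -s /net/install/export/s10u11   # upgrade the inactive BE from an OS image (path is an assumption)
    # luactivate new-zfsBE                                      # mark the new BE as the one to boot next
    # init 6                                                    # reboot with init or shutdown, not reboot, to complete activation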

Example 12-3 Creating a Boot Environment on a New Root Pool

In this example, a new ZFS root pool, newPool, is created on a separate slice, c0t2d0s5. The lucreate command creates a new ZFS boot environment, new-zfsbe. The -p option is required because the boot environment is being created in a different root pool.

# zpool create newPool c0t2d0s5
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
newPool            92.5K  18.7G    31K  /newPool
rpool              11.4G  2.95G    31K  /rpool
rpool/ROOT         4.34G  2.95G    31K  legacy
rpool/ROOT/zfsBE   4.34G  2.95G  4.34G  /
rpool/dump         2.06G  5.02G    16K  -
rpool/swap         5.04G  7.99G    16K  -
# lucreate -c c0t1d0s5 -n new-zfsbe -p newPool 
Checking GRUB menu...
Analyzing system configuration.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <new-zfsbe>.
Source boot environment is <c0t1d0s5>.
Creating file systems on boot environment <new-zfsbe>.
Creating <zfs> file system for </> in zone <global> on <newPool/ROOT/new-zfsbe>.
Populating file systems on boot environment <new-zfsbe>.
Analyzing zones.
Mounting ABE <new-zfsbe>.
Generating file list.
Copying data from PBE <c0t1d0s5> to ABE <new-zfsbe>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <new-zfsbe>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <c0t1d0s5>.
Making boot environment <new-zfsbe> bootable.
Updating bootenv.rc on ABE <new-zfsbe>.
Saving existing file </boot/grub/menu.lst> in top level dataset for BE <new-zfsbe> as <mount-point>//boot/grub/menu.lst.prev.
File </boot/grub/menu.lst> propagation successful
Copied GRUB menu from PBE to ABE
No entry for BE <new-zfsbe> in GRUB menu
Population of boot environment <new-zfsbe> successful.
Creation of boot environment <new-zfsbe> successful. 
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c0t0d0                     yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
new-zfsbe                  yes      no     no        yes    -

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
newPool                 7.15G  11.6G    36K  /newPool
newPool/ROOT            4.05G  11.6G    31K  legacy
newPool/ROOT/new-zfsbe  4.05G  11.6G  4.05G  /
newPool/dump            1.03G  12.6G    16K  -
newPool/swap            2.06G  13.6G    16K  -
rpool                   11.4G  2.95G    31K  /rpool
rpool/ROOT              4.34G  2.95G    31K  legacy
rpool/ROOT/zfsBE        4.34G  2.95G  4.34G  /
rpool/dump              2.06G  5.02G    16K  -
rpool/swap              5.04G  7.99G    16K  -