Migrating a UFS File System to a ZFS File System

This procedure describes how to migrate a UFS file system to a ZFS file system. The lucreate command copies the critical file systems from an active UFS boot environment to a new boot environment within an existing ZFS root pool. User-defined (shareable) file systems are not copied and are not shared with the source UFS boot environment. In addition, /swap is not shared between the UFS file system and the ZFS root pool. For an overview of critical and shareable file systems, see File System Types.

How to Migrate a UFS File System to a ZFS File System


Note - To migrate an active UFS root (/) file system to a ZFS root pool, you must provide the name of the root pool. The critical file systems are copied into the root pool.


  1. Before running Live Upgrade for the first time, you must install the latest Live Upgrade packages from installation media and install the patches listed in the My Oracle Support knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844), which you can find by searching the My Oracle Support web site.

    The latest packages and patches provide all of the current bug fixes and new features for the release. Install all of the patches that are relevant to your system before you create a new boot environment.

    The following substeps describe the steps in the My Oracle Support knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844).


    Note - Using Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Earlier releases do not include the ZFS and Live Upgrade software that is required to perform these tasks.


    1. Become superuser or assume an equivalent role.

      Note - Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.
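
      For example, you can become superuser with the su command. The percent-sign prompt shown before su is only an assumption about your login shell:

      % su -
      Password:
      #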


    2. From the My Oracle Support web site, follow the instructions in knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844) to remove and add Live Upgrade packages.

      The three Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Live Upgrade, upgrading to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.

      # pkgrm SUNWlucfg SUNWluu SUNWlur
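
      If the Live Upgrade packages currently on your system are from a release earlier than Solaris 10 8/07, SUNWlucfg is not installed and you can omit it from the command:

      # pkgrm SUNWluu SUNWlur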
    3. Install the new Live Upgrade packages from the release to which you are upgrading. For instructions, see  Installing Live Upgrade.
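
      For example, if the installation media is mounted at /cdrom/cdrom0 (this mount point and directory layout are assumptions; substitute the path to your media or network install image), you can add the packages with the pkgadd command:

      # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu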
    4. Before running Live Upgrade, you must install the required patches. These patches ensure that you have all the latest bug fixes and new features in the release.

      Consult My Oracle Support for the most recently updated patch list, which is maintained in knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844).

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.
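
        For example, to create the directory (the directory name comes from this step and is only a suggestion):

        # mkdir -p /var/tmp/lupatches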

      • From the My Oracle Support web site, obtain the list of patches.

      • Change to the patch directory.

        # cd /var/tmp/lupatches
      • Install the patches with the patchadd command.

        # patchadd patch_id

        patch_id is the patch number. Separate multiple patch numbers with a space.


        Note - The patches need to be applied in the order that is specified in knowledge document 1004881.1 - Live Upgrade Software Patch Requirements (formerly 206844).
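
        For example, with hypothetical patch IDs (obtain the actual patch IDs and their required order from the knowledge document):

        # patchadd 123456-01 234567-02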


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required. Otherwise, Live Upgrade fails.

        # init 6

        You now have the packages and patches necessary for a successful migration.
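
        You can optionally confirm that the Live Upgrade packages are installed before you continue; for example:

        # pkginfo SUNWlucfg SUNWlur SUNWluu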

  2. Create a ZFS root pool.

    The ZFS root pool must be on a single slice to be bootable and upgradeable.

    # zpool create rpool c0t1d0s5
    rpool

    Specifies the name of the new ZFS root pool to be created.

    c0t1d0s5

    Creates the new root pool on the disk slice, c0t1d0s5.

    For information about creating a new root pool, see the Oracle Solaris ZFS Administration Guide.
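
    You can verify that the new pool was created before you continue; for example:

    # zpool status rpool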

  3. Migrate your UFS root (/) file system to the new ZFS root pool.
    # lucreate [-c ufsBE] -n new-zfsBE -p rpool
    ufsBE

    The name for the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.

    new-zfsBE

    The name for the boot environment to be created. The name must be unique on the system.

    -p rpool

    Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.

    The creation of the new ZFS boot environment might take a while because the UFS file system data is being copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

  4. (Optional) Verify that the boot environment is complete.
    # lustatus
    boot environment   Is         Active   Active     Can        Copy
    Name               Complete   Now      OnReboot   Delete     Status
    ------------------------------------------------------------------
    ufsBE              yes        yes      yes        no         -
    new-zfsBE          yes        no       no         yes        -
  5. (Optional) Verify the basic dataset information on the system.

    The list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.

    # zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    rpool                 11.4G  2.95G    31K  /rpool
    rpool/ROOT            4.34G  2.95G    31K  legacy
    rpool/ROOT/new-zfsBE  4.34G  2.95G  4.34G  /
    rpool/dump            2.06G  5.02G    16K  -
    rpool/swap            5.04G  7.99G    16K  -

    The mount points listed for the new boot environment are temporary until the luactivate command is executed. The /dump and /swap volumes are not shared with the original UFS boot environment, but they are shared among the boot environments within the ZFS root pool.
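
    For example, you can display the mount point that is currently set for the new boot environment's root dataset (the dataset name comes from this procedure):

    # zfs get mountpoint rpool/ROOT/new-zfsBE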

    You can now upgrade and activate the new boot environment.

Example 12-1 Migrating a UFS Root (/) File System to a ZFS Root Pool

In this example, the new ZFS root pool, rpool, is created on a separate slice, c0t0d0s4. The lucreate command migrates the currently running UFS boot environment, c0t0d0, to the new ZFS boot environment, new-zfsBE, and places the new boot environment in rpool.

# zpool create rpool c0t0d0s4

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT 
rpool                      9.29G  57.6G    20K  /rpool
# lucreate -c c0t0d0 -n new-zfsBE -p rpool
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <c0t0d0>.
Creating initial configuration for primary boot environment <c0t0d0>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot 
environment; cannot get BE ID.
PBE configuration successful: PBE name <c0t0d0> PBE Boot Device 
</dev/dsk/c1t0d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s6> is not a root device for any boot 
environment; cannot get BE ID.
Creating configuration for boot environment <new-zfsBE>.
Source boot environment is <c0t0d0>.
Creating file systems on boot environment <new-zfsBE>.
Creating <zfs> file system for </> in zone <global> on 
<rpool/ROOT/new-zfsBE>.
Populating file systems on boot environment <new-zfsBE>.
Analyzing zones.
Mounting ABE <new-zfsBE>.
Generating file list.
Copying data from PBE <c0t0d0> to ABE <new-zfsBE>.
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <new-zfsBE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <c0t0d0>.
Making boot environment <new-zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-Cjh.mnt
updating /.alt.tmp.b-Cjh.mnt/platform/sun4u/boot_archive
Population of boot environment <new-zfsBE> successful.
Creation of boot environment <new-zfsBE> successful.


# lustatus
boot environment   Is         Active   Active     Can        Copy
Name               Complete   Now      OnReboot   Delete     Status
------------------------------------------------------------------
c0t0d0             yes        yes      yes        no         -
new-zfsBE          yes        no       no         yes        -

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 11.4G  2.95G    31K  /rpool
rpool/ROOT            4.34G  2.95G    31K  legacy
rpool/ROOT/new-zfsBE  4.34G  2.95G  4.34G  /
rpool/dump            2.06G  5.02G    16K  -
rpool/swap            5.04G  7.99G    16K  -

You can now upgrade or activate the new boot environment.

In this example, the new boot environment is upgraded by using the luupgrade command from an image that is stored in the location indicated with the -s option.

# luupgrade -n new-zfsBE -u -s /net/install/export/s10/combined.s10
 51135 blocks
miniroot filesystem is <lofs>
Mounting miniroot at
</net/install/export/solaris_10/combined.solaris_10_wos
/Solaris_10/Tools/Boot>
Validating the contents of the media
</net/install/export/s10/combined.s10>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains Solaris version <10_1008>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live
Upgrade requests.
Creating upgrade profile for BE <new-zfsBE>.
Determining packages to install or upgrade for BE <new-zfsBE>.
Performing the operating system upgrade of the BE <new-zfsBE>.
CAUTION: Interrupting this process may leave the boot environment
unstable or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <new-zfsBE>.
The operating system patch installation is complete.
INFORMATION: The file /var/sadm/system/logs/upgrade_log on boot
environment <new-zfsBE> contains a log of the upgrade operation.
INFORMATION: The file var/sadm/system/data/upgrade_cleanup on boot
environment <new-zfsBE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all
of the files are located on boot environment <new-zfsBE>.
Before you activate boot environment <new-zfsBE>, determine if any
additional system maintenance is required or if additional media
of the software distribution must be installed.
The Solaris upgrade of the boot environment <new-zfsBE> is complete.

The new boot environment can be activated anytime after it is created.

# luactivate new-zfsBE
A Live Upgrade Sync operation will be performed on startup of boot 
environment <new-zfsBE>.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following 
process
needs to be followed to fallback to the currently working boot 
environment:

1. Enter the PROM monitor (ok prompt).

2. Boot the machine to Single User mode using a different boot device
(like the Solaris Install CD or Network). Examples:

   At the PROM monitor (ok prompt):
   For boot to Solaris CD:  boot cdrom -s
   For boot to network:     boot net -s

3. Mount the Current boot environment root slice to some directory (like
/mnt). You can use the following command to mount:

   mount -Fufs /dev/dsk/c1t0d0s0 /mnt

4. Run <luactivate> utility with out any arguments from the current boot
environment root slice, as shown below:

   /mnt/sbin/luactivate

5. luactivate, activates the previous working boot environment and
indicates the result.

6. Exit Single User mode and reboot the machine.

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.

Reboot the system to the ZFS boot environment.

# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

If you fall back to the UFS boot environment, you must import again any ZFS storage pools that were created in the ZFS boot environment, because they are not automatically available in the UFS boot environment; a sample zpool import command follows the output below. You will see messages similar to the following example when you switch back to the UFS boot environment.

# luactivate c0t0d0
WARNING: The following files have changed on both the current boot 
environment <new-zfsBE> zone <global> and the boot environment 
to be activated <c0t0d0>:
 /etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current 
boot environment <new-zfsBE> zone <global> and the boot environment to be
activated <c0t0d0>. These files will not be automatically synchronized 
from the current boot environment <new-zfsBE> when boot environment <c0t0d0>
is activated.
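
After you boot back to the UFS boot environment, you can re-import the pools with the zpool import command. Running zpool import without arguments lists the pools that are available for import; supplying a pool name imports that pool. The pool name datapool in this sketch is hypothetical:

# zpool import
# zpool import datapool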