Solaris 10 5/09 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Part II Upgrading and Migrating With Solaris Live Upgrade to a ZFS Root Pool

This part provides an overview and instructions for using Solaris Live Upgrade to create and upgrade an inactive boot environment on ZFS storage pools. Also, you can migrate your UFS root (/) file system to a ZFS root pool.

Chapter 11 Solaris Live Upgrade and ZFS (Overview)

With Solaris Live Upgrade, you can migrate your UFS file systems to a ZFS root pool and create ZFS root file systems from an existing ZFS root pool.


Note –

Creating boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. When performing a Solaris Live Upgrade for a UFS file system, both the command-line parameters and operation of Solaris Live Upgrade remain unchanged. To perform a Solaris Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.


The following sections provide an overview of these tasks:

Introduction to Using Solaris Live Upgrade With ZFS

If you have a UFS file system, Solaris Live Upgrade works the same as in previous releases. You can now migrate from UFS file systems to a ZFS root pool and create new boot environments within a ZFS root pool. For these tasks, the lucreate command has been enhanced with the -p option. The command syntax is the following:


# lucreate [-c active_BE_name] -n BE_name [-p zfs_root_pool]

The -p option specifies the ZFS pool in which a new boot environment resides. This option can be omitted if the source and target boot environments are within the same pool.
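
For example, the following two commands show both forms. The pool name and boot environment names are placeholders used only for illustration:


# lucreate -n new-zfsBE -p rpool
# lucreate -n newer-zfsBE

The first command places the new boot environment in the ZFS root pool rpool. The second command omits the -p option because the source and target boot environments reside in the same pool.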

The -m option of the lucreate command is not supported with ZFS. Other lucreate command options work as usual, with some exceptions. For limitations, see System Requirements and Limitations When Using Solaris Live Upgrade.

Migrating From a UFS File System to a ZFS Root Pool

If you create a boot environment from the currently running system, the lucreate command copies the UFS root (/) file system to a ZFS root pool. The copy process might take time, depending on your system.

When you are migrating from a UFS file system, the source boot environment can be a UFS root (/) file system on a disk slice. You cannot create a boot environment on a UFS file system from a source boot environment on a ZFS root pool.

Migrating From a UFS root (/) File System to ZFS Root Pool

The following commands create a ZFS root pool and then create a new boot environment from a UFS root (/) file system in that pool. A ZFS root pool must exist before the lucreate operation and must be created with slices rather than whole disks to be upgradeable and bootable. The disk must have an SMI label, not an EFI label. For more limitations, see System Requirements and Limitations When Using Solaris Live Upgrade.
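
Before you create the pool, you can print the slice layout of the intended disk with the prtvtoc command; the device name here is only an example. If the disk has an EFI label, relabel it with an SMI (VTOC) label by using the format(1M) utility before creating the pool.


# prtvtoc /dev/rdsk/c0t1d0s2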

Figure 11–1 shows the zpool command that creates a root pool, rpool, on a separate slice, c0t1d0s5. The disk slice c0t0d0s0 contains a UFS root (/) file system. In the lucreate command, the -c option names the currently running boot environment, c0t0d0, which is a UFS root (/) file system. The -n option assigns the name new-zfsBE to the boot environment to be created. The -p option specifies where to place the new boot environment, rpool. The UFS /export file system and the /swap volume are not copied to the new boot environment.

Figure 11–1 Migrating From a UFS File System to a ZFS Root Pool



Example 11–1 Migrating From a UFS root (/) File System to ZFS Root Pool

This example shows the same commands as in Figure 11–1. The commands create a new root pool, rpool, and create a new boot environment in the pool from a UFS root (/) file system. In this example, the zfs list command shows the ZFS root pool created by the zpool command. The next zfs list command shows the datasets created by the lucreate command.


# zpool create rpool c0t1d0s5
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT 
rpool                      9.29G  57.6G    20K  /rpool

# lucreate -c c0t0d0 -n new-zfsBE -p rpool
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT 
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  - 
rpool/swap                 1.95G      -  1.95G  - 

The new boot environment is rpool/ROOT/new-zfsBE. The boot environment, new-zfsBE, is ready to be upgraded and activated.


Migrating a UFS File System With Solaris Volume Manager Volumes Configured to a ZFS Root File System

You can migrate your UFS file system if your system has Solaris Volume Manager (SVM) volumes. To create a UFS boot environment from an existing SVM configuration, you create a new boot environment from your currently running system. Then create the ZFS boot environment from the new UFS boot environment.

In contrast to Solaris Volume Manager (SVM), ZFS uses the concept of storage pools to manage physical storage. Historically, file systems were constructed on top of a single physical device. To address multiple devices and provide for data redundancy, the concept of a volume manager was introduced to present the image of a single device, so that file systems would not have to be modified to take advantage of multiple devices. This design added another layer of complexity and ultimately prevented certain file system advances, because the file system had no control over the physical placement of data on the virtualized volumes.

ZFS storage pools replace SVM. ZFS completely eliminates volume management. Instead of forcing you to create virtualized volumes, ZFS aggregates devices into a storage pool. The storage pool describes the physical characteristics of the storage, such as device layout and data redundancy, and acts as an arbitrary data store from which file systems can be created. File systems are no longer constrained to individual devices, so they can share space with all file systems in the pool. You no longer need to predetermine the size of a file system, because file systems grow automatically within the space allocated to the storage pool. When new storage is added, all file systems within the pool can immediately use the additional space without additional work. In many ways, the storage pool acts like a virtual memory system: when a memory DIMM is added to a system, the operating system does not force you to run commands to configure the memory and assign it to individual processes. All processes on the system automatically use the additional memory.
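
For example, the following commands sketch this model with hypothetical pool and dataset names; they are not part of the migration procedure. Both file systems draw from the same pool of storage, and neither requires a predetermined size.


# zpool create tank mirror c1t0d0 c1t1d0
# zfs create tank/home
# zfs create tank/home/user1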


Example 11–2 Migrating From a UFS root (/) File System With SVM Volumes to ZFS Root Pool

When you migrate a system with SVM volumes, the SVM volume configuration is not carried over. Instead, you can set up mirrors within the ZFS root pool, as in the following example.

In this example, the first lucreate command uses the -m option to create a new boot environment, ufsBE, from the currently running system. The disk slice c1t0d0s0 contains a UFS root (/) file system configured with SVM volumes. The zpool command creates a root pool, rpool, as a mirrored (RAID-1) configuration of the slices c0t0d0s0 and c0t1d0s0. In the second lucreate command, the -n option assigns the name c0t0d0s0 to the boot environment to be created, the -s option identifies the UFS boot environment, ufsBE, as the source of the copy, and the -p option specifies where to place the new boot environment, rpool.


# lucreate -n ufsBE -m /:/dev/md/dsk/d104:ufs
# zpool create rpool mirror c0t0d0s0 c0t1d0s0
# lucreate -n c0t0d0s0 -s ufsBE -p rpool

The boot environment, c0t0d0s0, is ready to be upgraded and activated.


Creating a New Boot Environment From a ZFS Root Pool

You can either create a new ZFS boot environment within the same root pool or on a new root pool. This section contains the following overviews:

Creating a New Boot Environment Within the Same Root Pool

When you create a new boot environment within the same ZFS root pool, the lucreate command creates a snapshot from the source boot environment and then makes a clone from that snapshot. The creation of the snapshot and clone is almost instantaneous, and the disk space used is minimal. The amount of space ultimately required depends on how many files are replaced as part of the upgrade process. The snapshot is read-only, but the clone is a read-write copy of the snapshot. Any changes made to the clone boot environment are not reflected in either the snapshot or the source boot environment from which the snapshot was made.
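
Conceptually, this step is similar to the following ZFS operations, shown here only for illustration with hypothetical dataset names. The lucreate command performs the equivalent steps for you.


# zfs snapshot rpool/ROOT/zfsBE@new-zfsBE
# zfs clone rpool/ROOT/zfsBE@new-zfsBE rpool/ROOT/new-zfsBE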


Note –

As data within the active dataset changes, the snapshot consumes space by continuing to reference the old data. As a result, the snapshot prevents the data from being freed back to the pool. For more information about snapshots, see Chapter 7, Working With ZFS Snapshots and Clones, in Solaris ZFS Administration Guide.


When the current boot environment resides on the same ZFS pool, the -p option is omitted.

Figure 11–2 shows the creation of a ZFS boot environment from a ZFS root pool. The slice c0t0d0s0 contains the ZFS root pool, rpool. In the lucreate command, the -n option assigns the name new-zfsBE to the boot environment to be created. A snapshot of the original root pool, rpool@new-zfsBE, is created, and the snapshot is then used to make the clone that becomes the new boot environment, new-zfsBE. The boot environment, new-zfsBE, is ready to be upgraded and activated.

Figure 11–2 Creating a New Boot Environment on the Same Root Pool



Example 11–3 Creating a Boot Environment Within the Same ZFS Root Pool

This example shows the same command as in Figure 11–2, which creates a new boot environment in the same root pool. The lucreate command names the currently running boot environment with the -c zfsBE option, and the -n new-zfsBE option assigns the name to the new boot environment. The zfs list command shows the ZFS datasets with the new boot environment and snapshot.


# lucreate -c zfsBE -n new-zfsBE
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT 
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE           5.38G  57.6G   551M  
rpool/ROOT/zfsBE@new-zfsBE 66.5K      -   551M  -
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  - 
rpool/swap                 1.95G      -  1.95G  - 

Creating a New Boot Environment on Another Root Pool

You can use the lucreate command to copy an existing ZFS boot environment into another ZFS root pool. The copy process might take some time, depending on your system.

Figure 11–3 shows the zpool command that creates a ZFS root pool, rpool2, on c0t1d0s5 because a bootable ZFS root pool does not yet exist. In the lucreate command, the -n option assigns the name new-zfsBE to the boot environment to be created. The -p option specifies where to place the new boot environment.

Figure 11–3 Creating a New Boot Environment on Another Root Pool



Example 11–4 Creating a Boot Environment on a Different ZFS Root Pool

This example shows the same commands as in Figure 11–3, which create a new root pool and then a new boot environment in the newly created root pool. In this example, the zpool create command creates rpool2. The zfs list command shows that no ZFS datasets are created in rpool2. The datasets are created with the lucreate command.


# zpool create rpool2 c0t2d0s5
# zfs list
NAME                             USED    AVAIL   REFER   MOUNTPOINT 
rpool2                           9.29G    57.6G     20K   /rpool2 
rpool                            9.29G    57.6G     20K   /.new.lulib.rs.109262
rpool/ROOT                       5.46G    57.6G     18K   legacy
rpool/ROOT/zfsBE                 5.46G    57.6G   551M  
rpool/dump                       3.99G        -   3.99G   - 
rpool/swap                       3.99G        -   3.99G   - 

The new ZFS root pool, rpool2, is created on disk slice c0t2d0s5.


# lucreate -n new-zfsBE -p rpool2
# zfs list
NAME                             USED    AVAIL   REFER   MOUNTPOINT 
rpool2                           9.29G    57.6G     20K   /rpool2 
rpool2/ROOT/                     5.38G    57.6G     18K   /rpool2/ROOT 
rpool2/ROOT/new-zfsBE            5.38G    57.6G    551M   /tmp/.new.luupdall.109859
rpool2/dump                      3.99G        -   3.99G   - 
rpool2/swap                      3.99G        -   3.99G   - 
rpool                            9.29G    57.6G     20K   /.new.lulib.rs.109262
rpool/ROOT                       5.46G    57.6G     18K   legacy
rpool/ROOT/zfsBE                 5.46G    57.6G   551M  
rpool/dump                       3.99G        -   3.99G   - 
rpool/swap                       3.99G        -   3.99G   - 

The new boot environment, new-zfsBE, is created on rpool2 along with the other datasets, ROOT, dump and swap. The boot environment, new-zfsBE, is ready to be upgraded and activated.


Creating a New Boot Environment From a Source Other Than the Currently Running System

If you are creating a boot environment from a source other than the currently running system, you must use the lucreate command with the -s option. The -s option works the same as for a UFS file system: it provides the path to the alternate root (/) file system, which is the source of the copy for the new ZFS boot environment. The alternate root can be either a UFS root (/) file system or a ZFS root pool. The copy process might take time, depending on your system.


Example 11–5 Creating a Boot Environment From an Alternate Root (/) File System

The following command creates a new ZFS boot environment from an existing ZFS root pool. The -n option assigns the name new-zfsBE to the boot environment to be created. The -s option specifies the boot environment, source-zfsBE, to be used as the source of the copy instead of the currently running boot environment. The -p option specifies to place the new boot environment in rpool2.


# lucreate -n new-zfsBE  -s source-zfsBE -p rpool2

The boot environment, new-zfsBE, is ready to be upgraded and activated.


Creating a ZFS Boot Environment on a System With Non-Global Zones Installed

You can use Solaris Live Upgrade to migrate your non-global zones to a ZFS root file system. For overview, planning, and step-by-step procedures, see Chapter 14, Solaris Live Upgrade For ZFS With Non-Global Zones Installed.

Additional Resources

For additional information about the topics included in this chapter, see the resources listed in Table 11–1.

Table 11–1 Additional Resources

Resource 

Location 

For ZFS information, including overview, planning, and step-by-step instructions 

Solaris ZFS Administration Guide

For using Solaris Live Upgrade on a system with UFS file systems 

Part I, Upgrading With Solaris Live Upgrade of this book

Chapter 12 Solaris Live Upgrade for ZFS (Planning)

This chapter provides guidelines and requirements for review before performing a migration of a UFS file system to a ZFS file system or before creating a new ZFS boot environment from an existing ZFS root pool.


Note –

Creating boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. When you perform a Solaris Live Upgrade for a UFS file system, both the command-line parameters and operation of Solaris Live Upgrade remain unchanged. To perform a Solaris Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.


System Requirements and Limitations When Using Solaris Live Upgrade

Be sure that you have read and understand the following requirements and limitations before performing a migration of a UFS file system to a ZFS file system or before creating a new ZFS boot environment from an existing ZFS root pool. These requirements are in addition to the requirements listed in Chapter 6, ZFS Root File System Installation (Planning), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

Table 12–1 Requirements and Limitations

Requirement or Limitation 

Description 

Information 

You must have at least the Solaris 10 10/08 release installed. 

Migrating from a UFS file system to a ZFS root pool with Solaris Live Upgrade or creating a new boot environment in a root pool is new in the Solaris 10 10/08 release. This release contains the software needed to use Solaris Live Upgrade with ZFS. You must have at least this release installed to use ZFS.

 

Disk space 

The minimum amount of available pool space for a bootable ZFS root file system depends on the amount of physical memory, the disk space available, and the number of boot environments to be created.  

For an explanation, see Disk Space Requirements for a ZFS Installation in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade.

When you migrate from a UFS root (/) file system to a ZFS root pool, consider these requirements.

  • Migration is possible only from a UFS file system to a ZFS file system.

    • File systems other than a UFS file system cannot be migrated to a ZFS root pool.

    • A UFS file system cannot be created from a ZFS root pool.

  • Before migrating, a ZFS storage pool must exist.

  • The ZFS storage pool must be created with slices rather than whole disks to be upgradeable and bootable.

    • The pool created with slices can be mirrored, but not a RAID-Z or non-redundant configuration of multiple disks. The SVM device information must already be available in the /dev/md/[r]dsk directory.

    • The pool must have an SMI label. An EFI-labeled disk cannot be booted.

    • x86 only: The ZFS pool must be in a slice with an fdisk partition.

When you migrate shared file systems, they cannot be copied to a separate slice on the new ZFS root pool. 

For example, when performing a Solaris Live Upgrade with a UFS root (/) file system, you can use the -m option to copy the /export file system to another device. The -m option does not provide a way to copy a shared file system to a ZFS pool.

 

When you are migrating a UFS root file system that contains non-global zones, shared file systems are not migrated. 

On a system with a UFS root (/) file system and non-global zones installed, a non-global zone is migrated as part of the UFS to ZFS migration if the zone resides in a critical file system, or the zone is cloned when you upgrade within the same ZFS pool. If a non-global zone resides in a shared UFS file system, before you can migrate to a ZFS root pool, you must first upgrade the zone, as in previous Solaris releases.

Do not use the ZFS rename command.

The Solaris Live Upgrade feature is unaware of the name change and subsequent commands, such as ludelete, will fail. In fact, do not rename your ZFS pools or file systems if you have existing boot environments that you want to continue to use.

 

Set dataset properties before the lucreate command is used.

Solaris Live Upgrade creates the datasets for the boot environment and ZFS volumes for the swap area and dump device but does not account for any existing dataset property modifications. This means that if you want a dataset property enabled in the new boot environment, you must set the property before the lucreate operation. For example:


# zfs set compression=on rpool/ROOT

See Introducing ZFS Properties in Solaris ZFS Administration Guide.

When creating a ZFS boot environment within the same ZFS root pool, you cannot use the include and exclude options of the lucreate command to customize the content.

You cannot use the -f, -o, -y, -Y, and -z options to include or exclude files from the primary boot environment when creating a boot environment in the same ZFS root pool. However, you can use these options in the following cases:

  • Creating a boot environment from a UFS file system to a UFS file system

  • Creating a boot environment from a UFS file system to a ZFS root pool

  • Creating a boot environment from a ZFS root pool to a different ZFS root pool

For information about using the include and exclude options, see To Create a Boot Environment and Customize the Content.

You cannot use Solaris Live Upgrade to upgrade non-root ZFS file systems. 

   

Additional Resources

For additional information about the topics included in this chapter, see the resources listed in Table 12–2.

Table 12–2 Additional Resources

Resource 

Location 

For more information about planning a ZFS installation 

Chapter 6, ZFS Root File System Installation (Planning), in Solaris 10 5/09 Installation Guide: Planning for Installation and Upgrade

For ZFS information, including overview, planning, and step-by-step instructions 

Solaris ZFS Administration Guide

For using Solaris Live Upgrade on a system with UFS file systems 

Part I, Upgrading With Solaris Live Upgrade of this book

Chapter 13 Creating a Boot Environment for ZFS Root Pools

This chapter provides step-by-step procedures on how to create a ZFS boot environment when you use Solaris Live Upgrade.


Note –

Migrating from a UFS file system to a ZFS root pool or creating ZFS boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. To use Solaris Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.


This chapter provides procedures for the following tasks:

For procedures on using ZFS when non-global zones are installed, see Chapter 14, Solaris Live Upgrade For ZFS With Non-Global Zones Installed.

Migrating a UFS File System to a ZFS File System

This procedure describes how to migrate a UFS file system to a ZFS file system. Creating a boot environment provides a method of copying critical file systems from an active UFS boot environment to a ZFS root pool. The lucreate command copies the critical file systems to a new boot environment within an existing ZFS root pool. User-defined (shareable) file systems are not copied and are not shared with the source UFS boot environment. Also, /swap is not shared between the UFS file system and ZFS root pool. For an overview of critical and shareable file systems, see File System Types.
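
At a high level, the migration consists of the following sequence of commands. The device name, boot environment names, and installation image path are placeholders; the required packages, patches, and command details are covered in the procedure that follows.


# zpool create rpool c0t1d0s5
# lucreate -c ufsBE -n new-zfsBE -p rpool
# luupgrade -u -n new-zfsBE -s /path/to/os_image
# luactivate new-zfsBE
# init 6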

How to Migrate a UFS File System to a ZFS File System


Note –

To migrate an active UFS root (/) file system to a ZFS root pool, you must provide the name of the root pool. The critical file systems are copied into the root pool.


  1. Before running Solaris Live Upgrade for the first time, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve info doc 206844. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

    The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

    The following substeps describe the steps in the SunSolve info doc 206844.


    Note –

    Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.


    1. Become superuser or assume an equivalent role.

    2. From the SunSolve web site, follow the instructions in info doc 206844 to remove and add Solaris Live Upgrade packages.

      The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.


      # pkgrm SUNWlucfg SUNWluu SUNWlur
      
    3. Install the new Solaris Live Upgrade packages from the release to which you are upgrading. For instructions, see  Installing Solaris Live Upgrade.

    4. Before running Solaris Live Upgrade, you are required to install the following patches. These patches ensure that you have all the latest bug fixes and new features in the release.

      Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.

      • From the SunSolve web site, obtain the list of patches.

      • Change to the patch directory.


        # cd /var/tmp/lupatches
        
      • Install the patches with the patchadd command.


        # patchadd patch_id
        

        patch_id is the patch number or numbers. Separate multiple patch names with a space.


        Note –

        The patches need to be applied in the order that is specified in info doc 206844.


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required or Solaris Live Upgrade fails.


        # init 6
        

        You now have the packages and patches necessary for a successful migration.

  2. Create a ZFS root pool.

    The ZFS root pool must be on a single slice to be bootable and upgradeable.


    # zpool create rpool  c0t1d0s5
    
    rpool

    Specifies the name of the new ZFS root pool to be created.

    c0t1d0s5

    Creates the new root pool on the disk slice, c0t1d0s5.

    For information about creating a new root pool, see the Solaris ZFS Administration Guide.

  3. Migrate your UFS root (/) file system to the new ZFS root pool.


    # lucreate [-c ufsBE] -n new-zfsBE -p rpool
    
    -c ufsBE

    Assigns the name ufsBE to the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.

    -n new-zfsBE

    Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.

    -p rpool

    Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.

    The creation of the new ZFS boot environment might take a while. The UFS file system data is being copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

  4. (Optional) Verify that the boot environment is complete.

    In this example, the lustatus command reports whether the boot environment creation is complete and bootable.


    # lustatus
    boot environment   Is         Active   Active     Can	    Copy 
    Name               Complete   Now	  OnReboot   Delete	 Status 
    -----------------------------------------------------------------
    ufsBE               yes       yes      yes        no         -
    new-zfsBE           yes       no       no        yes         -
  5. (Optional) Verify the basic dataset information on the system.

    The zfs list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.


    # zfs list
    NAME                        USED  AVAIL  REFER  MOUNTPOINT 
    rpool                      9.29G  57.6G    20K  /rpool
    rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
    rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
    rpool/dump                 1.95G      -  1.95G  - 
    rpool/swap                 1.95G      -  1.95G  - 

    The mount points listed for the new boot environment are temporary until the luactivate command is executed. The /dump and /swap volumes are not shared with the original UFS boot environment, but are shared within the ZFS root pool and boot environments within the root pool.

    You can now upgrade and activate the new boot environment. See Example 13–1.


Example 13–1 Migrating a UFS Root (/) File System to a ZFS Root Pool

In this example, the new ZFS root pool, rpool, is created on a separate slice, c0t0d0s4. The lucreate command migrates the currently running UFS boot environment, c0t0d0, to the new ZFS boot environment, new-zfsBE, and places the new boot environment in rpool.


# zpool create rpool c0t0d0s4

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT 
rpool                      9.29G  57.6G    20K  /rpool
# lucreate -c c0t0d0 -n new-zfsBE -p rpool
Analyzing system configuration.
Current boot environment is named <c0t0d0>.
Creating initial configuration for primary boot environment <c0t0d0>.
The device </dev/dsk/c0t0d0> is not a root device for any boot 
environment; cannot get BE ID.
PBE configuration successful: PBE name <c0t0d0> PBE Boot Device 
</dev/dsk/c0t0d0>.
Comparing source boot environment <c0t0d0> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot 
environment; cannot get BE ID.
Creating configuration for boot environment <new-zfsBE>.
Source boot environment is <c0t0d0>.
Creating boot environment <new-zfsBE>.
Creating file systems on boot environment <new-zfsBE>.
Creating <zfs> file system for </> in zone <global> on 
<rpool/ROOT/new-zfsBE>.
Populating file systems on boot environment <new-zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <new-zfsBE> successful.
Creation of boot environment <new-zfsBE> successful.

# lustatus
boot environment   Is         Active   Active     Can	    Copy 
Name               Complete   Now	  OnReboot   Delete	 Status 
------------------------------------------------------------------------ 
c0t0d0             yes       yes      yes        no         - 
new-zfsBE           yes       no       no        yes       -

# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT 
rpool                      9.29G  57.6G    20K  /rpool
rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
rpool/ROOT/zfsBE           5.38G  57.6G   551M  
rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
rpool/dump                 1.95G      -  1.95G  - 
rpool/swap                 1.95G      -  1.95G  - 

You can now upgrade or activate the new boot environment.

In this example, the new boot environment is upgraded by using the luupgrade command from an image that is stored in the location indicated with the -s option.


# luupgrade -n zfsBE -u -s /net/install/export/s10/combined.s10
 51135 blocks 
miniroot filesystem is <lofs>
Mounting miniroot at 
</net/install/export/solaris_10/combined.solaris_10_wos
/Solaris_10/Tools/Boot> 
Validating the contents of the media 
</net/install/export/s10/combined.s10>. 
The media is a standard Solaris media. 
The media contains an operating system upgrade image. 
The media contains Solaris version <10_1008>. 
Constructing upgrade profile to use. 
Locating the operating system upgrade program. 
Checking for existence of previously scheduled Live 
Upgrade requests. 
Creating upgrade profile for BE <zfsBE>. 
Determining packages to install or upgrade for BE <zfsBE>. 
Performing the operating system upgrade of the BE <zfsBE>. 
CAUTION: Interrupting this process may leave the boot environment 
unstable or unbootable. 
Upgrading Solaris: 100% completed 
Installation of the packages from this media is complete. 
Adding operating system patches to the BE <zfsBE>. 
The operating system patch installation is complete. 
INFORMATION: The file /var/sadm/system/logs/upgrade_log on boot 
environment <zfsBE> contains a log of the upgrade operation. 
INFORMATION: The file var/sadm/system/data/upgrade_cleanup on boot 
environment <zfsBE> contains a log of cleanup operations required. 
INFORMATION: Review the files listed above. Remember that all 
of the files are located on boot environment <zfsBE>. 
Before you activate boot environment <zfsBE>, determine if any 
additional system maintenance is required or if additional media 
of the software distribution must be installed. 
The Solaris upgrade of the boot environment <zfsBE> is complete.

The new boot environment can be activated anytime after it is created.


# luactivate new-zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.

Reboot the system to the ZFS boot environment.


# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

If you fall back to the UFS boot environment, then you need to import again any ZFS storage pools that were created in the ZFS boot environment because they are not automatically available in the UFS boot environment. You will see messages similar to the following example when you switch back to the UFS boot environment.


# luactivate c0t0d0
WARNING: The following files have changed on both the current boot 
environment <new-zfsBE> zone <global> and the boot environment 
to be activated <c0t0d0>:
 /etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current 
boot environment <zfsBE> zone <global> and the boot environment to be 
activated <c0t0d0>. These files will not be automatically synchronized 
from the current boot environment <new-zfsBE> when boot environment <c0t0d0>

Creating a Boot Environment Within the Same ZFS Root Pool

If you have an existing ZFS root pool and want to create a new ZFS boot environment within that pool, the following procedure provides the steps. After the inactive boot environment is created, the new boot environment can be upgraded and activated at your convenience. The -p option is not required when you create a boot environment within the same pool.

How to Create a ZFS Boot Environment Within the Same ZFS Root Pool

  1. Before running Solaris Live Upgrade for the first time, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve info doc 206844. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

    The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

    The following substeps describe the steps in the SunSolve info doc 206844.


    Note –

    Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.


    1. Become superuser or assume an equivalent role.

    2. From the SunSolve web site, follow the instructions in info doc 206844 to remove and add Solaris Live Upgrade packages.

      The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.



      # pkgrm SUNWlucfg SUNWluu SUNWlur
      
    3. Install the new Solaris Live Upgrade packages. For instructions, see  Installing Solaris Live Upgrade.

    4. Before running Solaris Live Upgrade, you are required to install the following patches. These patches ensure that you have all the latest bug fixes and new features in the release.

      Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.

      • From the SunSolve web site, obtain the list of patches.

      • Change to the patch directory as in this example.


        # cd /var/tmp/lupatches
        
      • Install the patches with the patchadd command.


        # patchadd path-to-patches patch_id patch_id
        

        path-to-patches is the path to the patch directory, such as /var/tmp/lupatches. patch_id is the patch number or numbers. Separate multiple patch names with a space.


        Note –

        The patches need to be applied in the order that is specified in info doc 206844.


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required or Solaris Live Upgrade fails.


        # init 6
        

        You now have the packages and patches necessary for a successful creation of a new boot environment.

  2. Create the new boot environment.


    # lucreate [-c zfsBE] -n new-zfsBE
    
    -c zfsBE

    Assigns the name zfsBE to the current boot environment. This option is not required and is used only when the first boot environment is created. If you run lucreate for the first time and you omit the -c option, the software creates a default name for you.

    -n new-zfsBE

    Assigns the name to the boot environment to be created. The name must be unique on the system.

    The creation of the new boot environment is almost instantaneous. A snapshot is created of each dataset in the current ZFS root pool, and a clone is then created from each snapshot. Snapshots are very disk-space efficient, and this process uses minimal disk space. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

  3. (Optional) Verify that the boot environment is complete.

    The lustatus command reports whether the boot environment creation is complete and bootable.


    # lustatus
    boot environment   Is        Active  Active     Can	    Copy 
    Name               Complete  Now	 OnReboot   Delete	 Status 
    ------------------------------------------------------------------------ 
    zfsBE               yes       yes     yes         no             -
    new-zfsBE           yes       no      no          yes            -
  4. (Optional) Verify the basic dataset information on the system.

    In this example, the ZFS root pool is named rpool, and the @ symbol indicates a snapshot. The new boot environment mount points are temporary until the luactivate command is executed. The /dump and /swap volumes are shared with the ZFS root pool and boot environments within the root pool.


    # zfs list
    NAME                                      USED  AVAIL  REFER  MOUNTPOINT 
    rpool                                    9.29G  57.6G    20K  /rpool 
    rpool/ROOT                               5.38G  57.6G    18K  /rpool/ROOT 
    rpool/ROOT/zfsBE                         5.38G  57.6G   551M  
    rpool/ROOT/zfsBE@new-zfsBE               66.5K      -   551M  -
    rpool/ROOT/new-zfsBE                     85.5K  57.6G   551M  /tmp/.alt.103197
    rpool/dump                               1.95G      -  1.95G  - 
    rpool/swap                               1.95G      -  1.95G  - 

    You can now upgrade and activate the new boot environment. See Example 13–2.


Example 13–2 Creating a Boot Environment Within the Same ZFS Root Pool

The following commands create a new ZFS boot environment, new-zfsBE. The -p option is not required because the boot environment is being created within the same root pool.


# lucreate -c zfsBE -n new-zfsBE
Analyzing system configuration.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Creating configuration for boot environment new-zfsBE.
Source boot environment is zfsBE.
Creating boot environment new-zfsBE.
Cloning file systems from boot environment zfsBE to create 
boot environment new-zfsBE.
Creating snapshot for <rpool> on <rpool> Creating clone for <rpool>. 
Setting canmount=noauto for <rpool> in zone <global> on <rpool>. 
Population of boot environment zfsBE successful on <rpool>.
# lustatus
boot environment   Is        Active  Active     Can	    Copy 
Name               Complete  Now	   OnReboot   Delete	 Status 
------------------------------------------------------------------------ 
zfsBE               yes       yes     yes         no          - 
new-zfsBE           yes       no      no          yes         -
# zfs list
NAME                                      USED  AVAIL  REFER  MOUNTPOINT 
rpool                                    9.29G  57.6G    20K  /rpool 
rpool/ROOT                               5.38G  57.6G    18K  /rpool/ROOT 
rpool/ROOT/zfsBE                         5.38G  57.6G   551M  
rpool/ROOT/zfsBE@new-zfsBE               66.5K      -   551M  - 
rpool/ROOT/new-zfsBE                     85.5K  57.6G   551M  /tmp/.alt.103197 
rpool/dump                               1.95G      -  1.95G  - 
rpool/swap                               1.95G      -  1.95G  - 

You can now upgrade and activate the new boot environment. For an example of upgrading a ZFS boot environment, see Example 13–1. For more examples of using the luupgrade command, see Chapter 5, Upgrading With Solaris Live Upgrade (Tasks).


# luactivate new-zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <new-zfsBE> successful.

Reboot the system to the ZFS boot environment.


# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

Creating a Boot Environment In a New Root Pool

If you have an existing ZFS root pool and want to create a new ZFS boot environment in a new root pool, the following procedure provides the steps. After the inactive boot environment is created, the new boot environment can be upgraded and activated at your convenience. The -p option is required to specify where to place the new boot environment. The new ZFS root pool must exist before the lucreate operation and must be on a separate slice to be bootable and upgradeable.

How to Create a Boot Environment on a New ZFS Root Pool

  1. Before running Solaris Live Upgrade for the first time, you must install the latest Solaris Live Upgrade packages from installation media and install the patches listed in the SunSolve info doc 206844. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

    The latest packages and patches ensure that you have all the latest bug fixes and new features in the release. Ensure that you install all the patches that are relevant to your system before proceeding to create a new boot environment.

    The following substeps describe the steps in the SunSolve info doc 206844.


    Note –

    Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.


    1. Become superuser or assume an equivalent role.

    2. From the SunSolve web site, follow the instructions in info doc 206844 to remove and add Solaris Live Upgrade packages.

      The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails. The SUNWlucfg package is new starting with the Solaris 10 8/07 release. If you are using Solaris Live Upgrade packages from a release previous to Solaris 10 8/07, you do not need to remove this package.



      # pkgrm SUNWlucfg SUNWluu SUNWlur
      
    3. Install the new Solaris Live Upgrade packages. For instructions, see  Installing Solaris Live Upgrade.

    4. Before running Solaris Live Upgrade, you are required to install the following patches. These patches ensure that you have all the latest bug fixes and new features in the release.

      Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.

      • From the SunSolve web site, obtain the list of patches.

      • Change to the patch directory as in this example.


        # cd /var/tmp/lupatches
        
      • Install the patches with the patchadd command.


        # patchadd path-to-patches patch_id patch_id
        

        path-to-patches is the path to the patch directory, such as /var/tmp/lupatches. patch_id is the patch number or numbers. Separate multiple patch names with a space.


        Note –

        The patches need to be applied in the order that is specified in info doc 206844.


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required or Solaris Live Upgrade fails.


        # init 6
        

        You now have the packages and patches necessary for a successful migration.

  2. Create a ZFS root pool.

    The ZFS root pool must be on a single slice to be bootable and upgradeable.


    # zpool create rpool2 c0t1d0s5
    
    rpool2

    Specifies the name of the new ZFS root pool to be created.

    c0t1d0s5

    Creates rpool2 on the bootable slice, c0t1d0s5.

    For information about creating a new root pool, see the Solaris ZFS Administration Guide.

  3. Create the new boot environment.


    # lucreate [-c zfsBE] -n new-zfsBE -p rpool2
    
    -c zfsBE

    Assigns the name zfsBE to the current ZFS boot environment.

    -n new-zfsBE

    Assigns the name to the boot environment to be created. The name must be unique on the system.

    -p rpool2

    Places the newly created ZFS root boot environment into the ZFS root pool defined in rpool2.

    The creation of the new ZFS boot environment might take a while. The file system data is being copied to the new ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

  4. (Optional) Verify that the boot environment is complete.

    The lustatus command reports whether the boot environment creation is complete and bootable.


    # lustatus
    boot environment   Is        Active  Active     Can	    Copy 
    Name               Complete  Now	 OnReboot   Delete	 Status 
    ------------------------------------------------------------------------ 
    zfsBE                       yes      yes     yes        no        - 
    new-zfsBE                   yes      no      no         yes        -
  5. (Optional) Verify the basic dataset information on the system.

    The following example displays the names of all datasets on the system. The mount points listed for the new boot environment are temporary until the luactivate command is executed. The volumes rpool2/dump and rpool2/swap are shared by the boot environments within the rpool2 pool.


    # zfs list
    NAME                             USED    AVAIL   REFER   MOUNTPOINT 
    rpool2                           9.29G    57.6G     20K   /rpool2 
    rpool2/ROOT/                     5.38G    57.6G     18K   /rpool2/ROOT 
    rpool2/ROOT/new-zfsBE            5.38G    57.6G    551M  /tmp/.new.luupdall.109859
    rpool2/dump                      3.99G        -   3.99G   - 
    rpool2/swap                      3.99G        -   3.99G   - 
    rpool                            9.29G    57.6G     20K   /.new.lulib.rs.109262
    rpool/ROOT                       5.46G    57.6G     18K   legacy
    rpool/ROOT/zfsBE                 5.46G    57.6G    551M  
    rpool/dump                       3.99G        -   3.99G   - 
    rpool/swap                       3.99G        -   3.99G   - 

    You can now upgrade and activate the new boot environment. See Example 13–3.


Example 13–3 Creating a Boot Environment on a New Root Pool

In this example, a new ZFS root pool, rpool2, is created on a separate slice, c0t1d0s5. The lucreate command creates a new ZFS boot environment, new-zfsBE. The -p option is required, because the boot environment is being created in a different root pool.


# zpool create rpool2 c0t1d0s5
# zfs list
NAME                             USED    AVAIL   REFER   MOUNTPOINT 
rpool2                           9.29G    57.6G     20K   /rpool2 
rpool                            9.29G    57.6G     20K   /.new.lulib.rs.109262
rpool/ROOT                       5.46G    57.6G     18K   legacy
rpool/ROOT/zfsBE                 5.46G    57.6G    551M  
rpool/dump                       3.99G        -   3.99G   - 
rpool/swap                       3.99G        -   3.99G   - 

# lucreate -c rpool -n new-zfsBE -p rpool2
Analyzing system configuration.
Current boot environment is named <rpool>.
Creating initial configuration for primary boot environment <rpool>.
The device </dev/dsk/c0t0d0> is not a root device for any 
boot environment; cannot get BE ID.
PBE configuration successful: PBE name <rpool> PBE Boot 
Device </dev/dsk/rpool>.
Comparing source boot environment <rpool> file systems with 
the file system(s) you specified for the new boot environment. 
Determining which file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any 
boot environment; cannot get BE ID.
Creating configuration for boot environment <new-zfsBE>.
Source boot environment is <rpool>.
Creating boot environment <new-zfsBE>.
Creating file systems on boot environment <new-zfsBE>.
Creating <zfs> file system for </> in zone <global> on 
<rpool2/ROOT/new-zfsBE>.
Populating file systems on boot environment <new-zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Making boot environment <new-zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <new-zfsBE> successful.
Creation of boot environment <new-zfsBE> successful.

# lustatus
boot environment   Is        Active  Active     Can	    Copy 
Name               Complete  Now	OnReboot   Delete	 Status 
------------------------------------------------------------------------ 
zfsBE                yes      yes     yes        no        - 
new-zfsBE            yes      no      no         yes        -
# zfs list
NAME                             USED    AVAIL   REFER   MOUNTPOINT 
rpool2                           9.29G    57.6G     20K   /rpool2 
rpool2/ROOT/                     5.38G    57.6G     18K   /rpool2/ROOT 
rpool2/ROOT/new-zfsBE            5.38G    57.6G    551M   /tmp/.new.luupdall.109859
rpool2/dump                      3.99G        -   3.99G   - 
rpool2/swap                      3.99G        -   3.99G   - 
rpool                            9.29G    57.6G     20K   /.new.lulib.rs.109262
rpool/ROOT                       5.46G    57.6G     18K   legacy
rpool/ROOT/zfsBE                 5.46G    57.6G    551M  
rpool/dump                       3.99G        -   3.99G   - 
rpool/swap                       3.99G        -   3.99G   - 

Creating a Boot Environment From a Source Other Than the Currently Running System

If you have an existing ZFS root pool or UFS boot environment that is not currently used as the active boot environment, you can use the following example to create a new ZFS boot environment from this boot environment. After the new ZFS boot environment is created, this new boot environment can be upgraded and activated at your convenience.

If you are creating a boot environment from a source other than the currently running system, you must use the lucreate command with the -s option. The -s option works the same as for a UFS file system: it provides the path to the alternate root (/) file system, which is the source of the copy for the new ZFS boot environment. The alternate root can be either a UFS root (/) file system or a ZFS root pool. The copy process might take time, depending on your system.

The following example shows how the -s option is used when creating a boot environment on another ZFS root pool.


Example 13–4 How to Create a Boot Environment From a Source Other Than the Currently Running System

The following command creates a new ZFS boot environment from an existing ZFS root pool. The -n option assigns the name new-zfsBE to the boot environment to be created. The -s option specifies the boot environment, rpool3, to be used as the source of the copy instead of the currently running boot environment. The -p option specifies to place the new boot environment in rpool2.


# lucreate -n new-zfsBE -s rpool3 -p rpool2
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfsBE2                     yes      no     no        yes    -
zfsBE3                     yes      no     no        yes    -
new-zfsBE                  yes      no     no        yes    -

# zfs list
NAME                            USED    AVAIL   REFER   MOUNTPOINT 
rpool2                         9.29G    57.6G     20K   /rpool2 
rpool2/ROOT                    5.38G    57.6G     18K   /rpool2/ROOT 
rpool2/ROOT/new-zfsBE          5.38G    57.6G    551M   /tmp/.new.luupdall.109859
rpool2/dump                    3.99G        -   3.99G   - 
rpool2/swap                    3.99G        -   3.99G   - 
rpool3                         9.29G    57.6G     20K   /rpool3 
rpool3/ROOT                    5.38G    57.6G     18K   /rpool3/ROOT 
rpool3/ROOT/zfsBE3             5.38G    57.6G    551M   /tmp/.new.luupdall.109859
rpool3/dump                    3.99G        -   3.99G   - 
rpool3/swap                    3.99G        -   3.99G   - 
rpool                          9.29G    57.6G     20K   /.new.lulib.rs.109262
rpool/ROOT                     5.46G    57.6G     18K   legacy
rpool/ROOT/zfsBE               5.46G    57.6G   551M  
rpool/dump                     3.99G        -   3.99G   - 
rpool/swap                     3.99G        -   3.99G   -

You can now upgrade and activate the new boot environment.
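
For example, you might upgrade the inactive boot environment from an installation image and then activate it with commands similar to the following sketch. Here, os_image_path is a placeholder for the path to the Solaris installation image on your network or media, and new-zfsBE is the boot environment name from the preceding example. Use the init or shutdown command, not reboot, to restart the system after activation.


# luupgrade -u -n new-zfsBE -s os_image_path
# luactivate new-zfsBE
# init 6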


Falling Back to a ZFS Boot Environment

If a failure is detected after upgrading, or if an application is not compatible with an upgraded component, you can fall back to the original boot environment with the luactivate command.

When you have migrated to a ZFS root pool from a UFS boot environment and you then decide to fall back to the UFS boot environment, you again need to import any ZFS storage pools that were created in the ZFS boot environment. These ZFS storage pools are not automatically available in the UFS boot environment. You will see messages similar to the following example when you switch back to the UFS boot environment.


# luactivate c0t0d0
WARNING: The following files have changed on both the current boot 
environment <new-ZFSbe> zone <global> and the boot environment 
to be activated <c0t0d0>: /etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current 
boot environment <new-ZFSbe> zone <global> and the boot environment to be 
activated <c0t0d0>. These files will not be automatically synchronized 
from the current boot environment <new-ZFSbe> when boot 
environment <c0t0d0>
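
After you boot back to the UFS boot environment, you can list the pools that are available for import and then import them, as in the following sketch. The pool name rpool2 is taken from the earlier examples in this chapter; substitute the names of the ZFS storage pools that were created in your ZFS boot environment.


# zpool import
# zpool import rpool2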

For examples of falling back to the original boot environment, see Chapter 6, Failure Recovery: Falling Back to the Original Boot Environment (Tasks).

Additional Resources

For additional information about the topics included in this chapter, see the resources listed in Table 13–1.

Table 13–1 Additional Resources

Resource: For ZFS information, including overview, planning, and step-by-step instructions
Location: Solaris ZFS Administration Guide

Resource: For using Solaris Live Upgrade on a system with UFS file systems
Location: Part I, Upgrading With Solaris Live Upgrade of this book

Chapter 14 Solaris Live Upgrade For ZFS With Non-Global Zones Installed

This chapter provides an overview and step-by-step procedures for migrating a UFS root (/) file system to a ZFS root pool on a system with non-global zones installed.


Note –

Migrating from a UFS root (/) file system to a ZFS root pool or creating ZFS boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. When you perform a Solaris Live Upgrade for a UFS file system, both the command-line parameters and operation of Solaris Live Upgrade remain unchanged. To perform a Solaris Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.


Creating a ZFS Boot Environment on a System With Non-Global Zones Installed (Overview and Planning)

You can use Solaris Live Upgrade to migrate your UFS root (/) file system with non-global zones installed to a ZFS root pool. All non-global zones that are associated with the file system are also copied to the new boot environment. The following non-global zone migration scenarios are supported:

• Pre-migration root file system and zone combination: UFS root file system with the non-global zone root directory in the UFS file system

  Post-migration root file system and zone combination: UFS root file system with the non-global zone root directory in a ZFS root pool; ZFS root pool with the non-global zone root directory in the ZFS root pool; or ZFS root pool with the non-global zone root directory in a UFS file system

• Pre-migration root file system and zone combination: UFS root file system with a non-global zone root in a ZFS root pool

  Post-migration root file system and zone combination: ZFS root pool with the non-global zone root in a ZFS root pool; or UFS root file system with the non-global zone root in a ZFS root pool

• Pre-migration root file system and zone combination: ZFS root pool with a non-global zone root directory in a ZFS root pool

  Post-migration root file system and zone combination: ZFS root pool with the non-global zone root directory in the ZFS root pool

On a system with a UFS root (/) file system and non-global zones installed, a non-global zone is migrated as part of the UFS-to-ZFS migration if the zone resides in a non-shared file system. If you create a new boot environment within the same ZFS pool instead, the zone is cloned. If a non-global zone resides in a shared UFS file system, you must first upgrade the zone, as in previous Solaris releases, before you can migrate to another ZFS root pool.

Migrating From a UFS root (/) File System With Non-Global Zones Installed to ZFS Root Pool (Tasks)

This section provides step-by-step instructions for migrating from a UFS root (/) file system to a ZFS root pool on a system with non-global zones installed. In this procedure, no non-global zones reside on a shared file system in the UFS file system.
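
Before you begin, you can confirm where each non-global zone root resides by listing the configured zones, as in the following sketch. The zone name myzone is the example zone name used later in this section; substitute your own zone names. Verify that no zone path is located on a shared file system, such as /export.


# zoneadm list -vc
# zonecfg -z myzone info zonepath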

How to Migrate a UFS File System to a ZFS Root Pool on a System With Non-Global Zones

The lucreate command creates a boot environment of a ZFS root pool from a UFS root (/) file system. A ZFS root pool must exist before the lucreate operation and must be created with slices rather than whole disks to be upgradeable and bootable. This procedure shows how an existing non-global zone associated with the UFS root (/) file system is copied to the new boot environment in a ZFS root pool.

In the following example, the existing non-global zone, myzone, has its non-global zone root in a UFS root (/) file system. The zone zzone has its zone root in a ZFS file system in the existing ZFS storage pool, pool. Solaris Live Upgrade is used to migrate the UFS boot environment, c1t2d0s0, to a ZFS boot environment, zfsBE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool, that is created before the Solaris Live Upgrade operation. The ZFS-based non-global zone, zzone, is cloned but retained in the ZFS pool, pool, and migrated to the new zfsBE boot environment.

  1. Complete the following steps the first time you perform a Solaris Live Upgrade.


    Note –

    Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.


    1. Remove existing Solaris Live Upgrade packages on your system if necessary. If you are upgrading to a new release, you must install the packages from that release.

      The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails.


      # pkgrm SUNWlucfg SUNWluu SUNWlur
      
    2. Install the new Solaris Live Upgrade packages from the release to which you are upgrading. For instructions, see  Installing Solaris Live Upgrade.

    3. Before installing or running Solaris Live Upgrade, you are required to install the following patches. These patches ensure that you have all the latest bug fixes and new features in the release.

      Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

      • Become superuser or assume an equivalent role.

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.

      • From the SunSolve web site, obtain the list of patches.

      • Change to the patch directory.


        # cd /var/tmp/lupatches
        
      • Install the patches with the patchadd command.


        # patchadd patch_id
        

        patch_id is the patch number or numbers. Separate multiple patch names with a space.


        Note –

        The patches need to be applied in the order that is specified in info doc 206844.


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required or Solaris Live Upgrade fails.


        # init 6
        
  2. Create a ZFS root pool.

    The ZFS root pool must be on a single slice to be bootable and upgradeable.


    # zpool create rpool c3t0d0s0
    

    In this example, the name of the new ZFS root pool to be created is rpool. The pool is created on a bootable slice, c3t0d0s0.

    For information about creating a new root pool, see the Solaris ZFS Administration Guide.

  3. Migrate your UFS root (/) file system to the new ZFS root pool.


    # lucreate [-c ufsBE] -n new-zfsBE -p rpool
    
    -c ufsBE

    Assigns the name ufsBE to the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.

    -n new-zfsBE

    Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.

    -p rpool

    Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.

    All non-shared non-global zones are copied to the new boot environment along with critical file systems. Creating the new ZFS boot environment might take a while because the UFS file system data is copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment.

  4. (Optional) Verify that the boot environment is complete.

    The lustatus command reports whether the boot environment creation is complete and bootable.


    # lustatus
    Boot Environment           Is       Active Active    Can    Copy
    Name                       Complete Now    On Reboot Delete Status
    -------------------------- -------- ------ --------- ------ ----------
    ufsBE                      yes      yes    yes       no     -
    new-zfsBE                  yes      no     no        yes    -
  5. (Optional) Verify the basic dataset information on the system.

    The zfs list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.


    # zfs list
    NAME                        USED  AVAIL  REFER  MOUNTPOINT 
    rpool                      9.29G  57.6G    20K  /rpool
    rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
    rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
    rpool/dump                 1.95G      -  1.95G  - 
    rpool/swap                 1.95G      -  1.95G  - 

    The mount points listed for the new boot environment are temporary until the luactivate command is executed. The dump and swap volumes are not shared with the original UFS boot environment, but they are shared among the ZFS boot environments within the root pool.


Example 14–1 Migrating From a UFS root (/) File System With Non-Global Zones Installed to ZFS Root Pool

In the following example, the existing non-global zone, myzone, has its non-global zone root in a UFS root (/) file system. The zone zzone has its zone root in a ZFS file system in the existing ZFS storage pool, pool. Solaris Live Upgrade is used to migrate the UFS boot environment, c1t2d0s0, to a ZFS boot environment, zfsBE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool, that is created before the Solaris Live Upgrade operation. The ZFS-based non-global zone, zzone, is cloned but retained in the ZFS pool, pool, and migrated to the new zfsBE boot environment.


# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared

# zpool create mpool mirror c3t0d0s0 c4t0d0s0
# lucreate -c c1t2d0s0 -n zfsBE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <c1t2d0s0>.
Creating initial configuration for primary boot environment <c1t2d0s0>.
The device </dev/dsk/c1t2d0s0> is not a root device for any 
boot environment; cannot get BE ID.
PBE configuration successful: PBE name <c1t2d0s0> PBE Boot Device 
</dev/dsk/c1t2d0s0>.
Comparing source boot environment <c1t2d0s0> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot
environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <c1t2d0s0>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.

When the lucreate operation completes, use the lustatus command to view the boot environment status as in this example.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         

# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared

Next, use the luactivate command to activate the new ZFS boot environment. For example:


# luactivate zfsBE
**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfsBE> successful.

Reboot the system to the ZFS BE.


# init 6
svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

Confirm the new boot environment and the status of the migrated zones as in this example.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -

If you fall back to the UFS boot environment, you again need to import any ZFS storage pools that were created in the ZFS boot environment, because those pools are not automatically available in the UFS boot environment. You will see messages similar to the following when you switch back to the UFS boot environment.


# luactivate c1t2d0s0
WARNING: The following files have changed on both the current boot 
environment <ZFSbe> zone <global> and the boot environment to be activated <c1t2d0s0>:
 /etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current 
boot environment <ZFSbe> zone <global> and the boot environment to be 
activated <c1t2d0s0>. These files will not be automatically synchronized 
from the current boot environment <ZFSbe> when boot environment <c1t2d0s0>

Additional Resources

For additional information about the topics included in this chapter, see the resources listed in Table 14–1.

Table 14–1 Additional Resources

Resource: For information about non-global zones, including overview, planning, and step-by-step instructions
Location: System Administration Guide: Solaris Containers-Resource Management and Solaris Zones

Resource: For ZFS information, including overview, planning, and step-by-step instructions
Location: Solaris ZFS Administration Guide

Resource: For information about using Solaris Live Upgrade on a system with UFS file systems
Location: Part I, Upgrading With Solaris Live Upgrade of this book, including Chapter 8, Upgrading the Solaris OS on a System With Non-Global Zones Installed