Solaris 10 10/08 Installation Guide: Solaris Live Upgrade and Upgrade Planning

Chapter 14 Solaris Live Upgrade For ZFS With Non-Global Zones Installed

This chapter provides an overview and step-by-step procedures for migrating a UFS root (/) file system to a ZFS root pool on a system with non-global zones installed.


Note –

Migrating from a UFS root (/) file system to a ZFS root pool or creating ZFS boot environments with Solaris Live Upgrade is new in the Solaris 10 10/08 release. When you perform a Solaris Live Upgrade for a UFS file system, both the command-line parameters and operation of Solaris Live Upgrade remain unchanged. To perform a Solaris Live Upgrade on a system with UFS file systems, see Part I, Upgrading With Solaris Live Upgrade of this book.


Creating a ZFS Boot Environment on a System With Non-Global Zones Installed (Overview and Planning)

You can use Solaris Live Upgrade to migrate a UFS root (/) file system that has non-global zones installed to a ZFS root pool. All non-global zones that are associated with the file system are also copied to the new boot environment. The following non-global zone migration scenarios are supported:

Pre-Migration Root File System and Zone Combination: UFS root file system with the non-global zone root directory in the UFS file system

Post-Migration Root File System and Zone Combination:

  • UFS root file system with the non-global zone root directory in a ZFS root pool

  • ZFS root pool with the non-global zone root directory in the ZFS root pool

  • ZFS root pool with the non-global zone root directory in a UFS file system

Pre-Migration Root File System and Zone Combination: UFS root file system with a non-global zone root in a ZFS root pool

Post-Migration Root File System and Zone Combination:

  • ZFS root pool with the non-global zone root in a ZFS root pool

  • UFS root file system with the non-global zone root in a ZFS root pool

Pre-Migration Root File System and Zone Combination: ZFS root pool with a non-global zone root directory in a ZFS root pool

Post-Migration Root File System and Zone Combination:

  • ZFS root pool with the non-global zone root directory in the ZFS root pool

On a system with a UFS root (/) file system and non-global zones installed, a non-global zone is migrated as part of the UFS to ZFS migration if the zone resides on a non-shared file system. If you upgrade within the same ZFS pool, the zone is cloned instead. If a non-global zone resides on a shared UFS file system, then before you can migrate to another ZFS root pool, you must first upgrade the non-global zone, as in previous Solaris releases.
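
Before you migrate, you can confirm where each non-global zone root resides. The following commands are one way to check; the zone name myzone and the zone path /zones/myzone are taken from the example later in this chapter. A zone whose root is on a separately mounted, shared UFS file system must be handled as described above.


# zoneadm list -cv
# zonecfg -z myzone info zonepath
# df -k /zones/myzone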

Migrating From a UFS root (/) File System With Non-Global Zones Installed to ZFS Root Pool (Tasks)

This section provides step-by-step instructions for migrating from a UFS root (/) file system to a ZFS root pool on a system with non-global zones installed. In this procedure, no non-global zones are on a shared file system in the UFS boot environment.

Procedure: How to Migrate a UFS File System to a ZFS Root Pool on a System With Non-Global Zones

The lucreate command creates a boot environment of a ZFS root pool from a UFS root (/) file system. A ZFS root pool must exist before the lucreate operation and must be created with slices rather than whole disks to be upgradeable and bootable. This procedure shows how an existing non-global zone associated with the UFS root (/) file system is copied to the new boot environment in a ZFS root pool.

In the following example, the existing non-global zone, myzone, has its non-global zone root in a UFS root (/) file system. The zone zzone has its zone root in a ZFS file system in the existing ZFS storage pool, pool. Solaris Live Upgrade is used to migrate the UFS boot environment, c1t2d0s0, to a ZFS boot environment, zfs2BE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool, that is created before the Solaris Live Upgrade operation. The ZFS-based non-global zone, zzone, is cloned but retained in the ZFS pool, pool, and migrated to the new zfs2BE boot environment.

  1. Complete the following steps the first time you perform a Solaris Live Upgrade.


    Note –

    Using Solaris Live Upgrade to create new ZFS boot environments requires at least the Solaris 10 10/08 release to be installed. Previous releases do not have the ZFS and Solaris Live Upgrade software to perform the tasks.


    1. Remove existing Solaris Live Upgrade packages on your system if necessary. If you are upgrading to a new release, you must install the packages from that release.

      The three Solaris Live Upgrade packages, SUNWluu, SUNWlur, and SUNWlucfg, comprise the software needed to upgrade by using Solaris Live Upgrade. These packages include existing software, new features, and bug fixes. If you do not remove the existing packages and install the new packages on your system before using Solaris Live Upgrade, upgrading to the target release fails.


      # pkgrm SUNWlucfg SUNWluu SUNWlur
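
      If you are not sure which of these packages are installed, you can check with the pkginfo command before running pkgrm; pkginfo reports an error for any package that is not present. This check is optional and not part of the documented procedure.


      # pkginfo SUNWlucfg SUNWluu SUNWlur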
      
    2. Install the new Solaris Live Upgrade packages from the release to which you are upgrading. For instructions, see  Installing Solaris Live Upgrade.
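
      For example, if the release media is mounted on the system, the packages can be added with the pkgadd command. The media path shown here is only an illustration; substitute the location of your own media or network install image.


      # pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu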

    3. Before installing or running Solaris Live Upgrade, you are required to install a set of patches. These patches ensure that you have all the latest bug fixes and new features in the release.

      Ensure that you have the most recently updated patch list by consulting SunSolve. Search for the info doc 206844 (formerly 72099) on the SunSolve web site.

      • Become superuser or assume an equivalent role.

      • If you are storing the patches on a local disk, create a directory such as /var/tmp/lupatches and download the patches to that directory.

      • From the SunSolve web site, obtain the list of patches.

      • Change to the patch directory.


        # cd /var/tmp/lupatches
        
      • Install the patches with the patchadd command.


        # patchadd patch_id
        

        patch_id is the patch number or numbers. Separate multiple patch names with a space.


        Note –

        The patches need to be applied in the order that is specified in info doc 206844.


      • Reboot the system if necessary. Certain patches require a reboot to be effective.

        x86 only: Rebooting the system is required or Solaris Live Upgrade fails.


        # init 6
        
  2. Create a ZFS root pool.

    The ZFS root pool must be on a single slice to be bootable and upgradeable.


    # zpool create rpool c3t0d0s0
    

    In this example, the name of the new ZFS root pool to be created is rpool. The pool is created on a bootable slice, c3t0d0s0.

    For information about creating a new root pool, see the Solaris ZFS Administration Guide.
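
    (Optional) To confirm that the pool was created on a slice rather than on a whole disk, check the pool configuration. The config section of the zpool status output should list a slice device such as c3t0d0s0, the device used in this example.


    # zpool status rpool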

  2. Migrate your UFS root (/) file system to the new ZFS root pool.


    # lucreate [-c ufsBE] -n new-zfsBE -p rpool
    
    -c ufsBE

    Assigns the name ufsBE to the current UFS boot environment. This option is not required and is used only when the first boot environment is created. If you run the lucreate command for the first time and you omit the -c option, the software creates a default name for you.

    -n new-zfsBE

    Assigns the name new-zfsBE to the boot environment to be created. The name must be unique on the system.

    -p rpool

    Places the newly created ZFS root (/) file system into the ZFS root pool defined in rpool.

    All non-shared non-global zones are copied to the new boot environment along with critical file systems. Creating the new ZFS boot environment might take a while because the UFS file system data is copied to the ZFS root pool. When the inactive boot environment has been created, you can use the luupgrade or luactivate command to upgrade or activate the new ZFS boot environment, as sketched below.
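
    For example, after the boot environment is created, commands similar to the following could be used to upgrade it from an installation image and then activate it. This is only a sketch: the image path is a placeholder, and in this procedure you would typically verify the new boot environment first, as shown in the next steps.


    # luupgrade -u -n new-zfsBE -s /path/to/solaris_install_image
    # luactivate new-zfsBE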

  4. (Optional) Verify that the boot environment is complete.

    The lustatus command reports whether the boot environment creation is complete and bootable.


    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    ufsBE                      yes      yes    yes       no     -         
    new-zfsBE                  yes      no     no        yes    -
  5. (Optional) Verify the basic dataset information on the system.

    The zfs list command displays the names of all datasets on the system. In this example, rpool is the name of the ZFS pool and new-zfsBE is the name of the newly created ZFS boot environment.


    # zfs list
    NAME                        USED  AVAIL  REFER  MOUNTPOINT 
    rpool                      9.29G  57.6G    20K  /rpool
    rpool/ROOT                 5.38G  57.6G    18K  /rpool/ROOT
    rpool/ROOT/new-zfsBE       5.38G  57.6G   551M  /tmp/.alt.luupdall.110034
    rpool/dump                 1.95G      -  1.95G  - 
    rpool/swap                 1.95G      -  1.95G  - 

    The mount points listed for the new boot environment are temporary until the luactivate command is executed. The dump and swap volumes are not shared with the original UFS boot environment, but they are shared by the ZFS boot environments within the root pool.
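
    (Optional) To inspect these datasets individually, you can list the volumes in the pool and check a dataset's mount point directly. These optional checks use the names from this example.


    # zfs list -t volume -r rpool
    # zfs get mountpoint rpool/ROOT/new-zfsBE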


Example 14–1 Migrating From a UFS root (/) File System With Non-Global Zones Installed to ZFS Root Pool

In the following example, the existing non-global zone, myzone, has its non-global zone root in a UFS root (/) file system. The zone zzone has its zone root in a ZFS file system in the existing ZFS storage pool, pool. Solaris Live Upgrade is used to migrate the UFS boot environment, c1t2d0s0, to a ZFS boot environment, zfs2BE. The UFS-based myzone zone migrates to a new ZFS storage pool, mpool, that is created before the Solaris Live Upgrade operation. The ZFS-based non-global zone, zzone, is cloned but retained in the ZFS pool, pool, and migrated to the new zfs2BE boot environment.


# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared

# zpool create mpool mirror c3t0d0s0 c4t0d0s0
# lucreate -c c1t2d0s0 -n zfs2BE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <c1t2d0s0>.
Creating initial configuration for primary boot environment <c1t2d0s0>.
The device </dev/dsk/c1t2d0s0> is not a root device for any 
boot environment; cannot get BE ID.
PBE configuration successful: PBE name <c1t2d0s0> PBE Boot Device 
</dev/dsk/c1t2d0s0>.
Comparing source boot environment <c1t2d0s0> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot
environment; cannot get BE ID.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <c1t2d0s0>.
Creating boot environment <zfs2BE>.
Creating file systems on boot environment <zfs2BE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfs2BE>.
Populating file systems on boot environment <zfs2BE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfs2BE>.
Creating compare database for file system </>.
Making boot environment <zfs2BE> bootable.
Creating boot_archive for /.alt.tmp.b-cBc.mnt
updating /.alt.tmp.b-cBc.mnt/platform/sun4u/boot_archive
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.

When the lucreate operation completes, use the lustatus command to view the boot environment status as in this example.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      yes    yes       no     -         
zfs2BE                     yes      no     no        yes    -         

# zoneadm list -iv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - myzone           installed  /zones/myzone                  native   shared
   - zzone            installed  /pool/zones                    native   shared

Next, use the luactivate command to activate the new ZFS boot environment. For example:


# luactivate zfs2BE
**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfs2BE> successful.

Reboot the system to the ZFS BE.


# init 6
# svc.startd: The system is coming down.  Please wait.
svc.startd: 79 system services are now being stopped.
.
.
.

Confirm the new boot environment and the status of the migrated zones as in this example.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
c1t2d0s0                   yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -         
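
To confirm that the non-global zones are available in the new boot environment, list them as well. The zones myzone and zzone from this example should be reported, for example:


# zoneadm list -iv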

If you fall back to the UFS boot environment, you then need to import again any ZFS storage pools that were created in the ZFS boot environment, because they are not automatically available in the UFS boot environment. You will see messages similar to the following when you switch back to the UFS boot environment.


# luactivate c1t2d0s0
WARNING: The following files have changed on both the current boot 
environment <zfs2BE> zone <global> and the boot environment to be activated <c1t2d0s0>:
 /etc/zfs/zpool.cache
INFORMATION: The files listed above are in conflict between the current 
boot environment <zfs2BE> zone <global> and the boot environment to be 
activated <c1t2d0s0>. These files will not be automatically synchronized 
from the current boot environment <zfs2BE> when boot environment <c1t2d0s0> 
is activated.
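
After you boot back to the UFS boot environment, commands like the following can be used to list the ZFS storage pools that are available for import and then import one; the pool name in the second command is a placeholder for a pool reported by the first command.


# zpool import
# zpool import poolname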

Additional Resources

For additional information about the topics included in this chapter, see the resources listed in Table 14–1.

Table 14–1 Additional Resources

Resource: For information about non-global zones, including overview, planning, and step-by-step instructions

Location: System Administration Guide: Solaris Containers-Resource Management and Solaris Zones

Resource: For ZFS information, including overview, planning, and step-by-step instructions

Location: Solaris ZFS Administration Guide

Resource: For information about using Solaris Live Upgrade on a system with UFS file systems

Location: Part I, Upgrading With Solaris Live Upgrade of this book, including Chapter 8, Upgrading the Solaris OS on a System With Non-Global Zones Installed