Oracle Solaris ZFS Administration Guide

How to Migrate a UFS Root File System With Zone Roots on UFS to a ZFS Root File System (Solaris 10 10/08)

This procedure explains how to migrate a UFS root file system with zones installed to a ZFS root file system and ZFS zone root configuration that can be upgraded or patched.

In the steps that follow, the example pool name is rpool, and the example boot environment names begin with s10BE (for example, s10BE2 and s10BE3).

  1. Upgrade the system to the Solaris 10 10/08 release if it is running a previous Solaris 10 release.

    For more information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

  2. Create the root pool.


    # zpool create rpool mirror c0t1d0 c1t1d0
    

    For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
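
    To verify the new pool configuration, check the pool status. The mirror devices shown above are examples and will differ on your system. For example:


    # zpool status rpool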

  3. Confirm that the zones from the UFS environment are booted.
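
    For example, list the configured zones and their states with the zoneadm command. Zones to be migrated should be in the running state:


    # zoneadm list -cv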

  4. Create the new ZFS boot environment.


    # lucreate -n s10BE2 -p rpool
    

    This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.
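
    Optionally, confirm that the new boot environment was created and is marked complete. For example:


    # lustatus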

  5. Activate the new ZFS boot environment.


    # luactivate s10BE2
    

    The system is now running a ZFS root file system, but the zone roots are still on the original UFS file system. The next steps are required to fully migrate the UFS zones to a supported ZFS configuration.

  6. Reboot the system.


    # init 6
    
  7. Migrate the zones to a ZFS BE.

    1. Boot the zones.
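
      For example, boot each zone and verify its state with the zoneadm command, where myzone is a placeholder for the zone name:


      # zoneadm -z myzone boot
      # zoneadm list -cv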

    2. Create another ZFS BE within the pool.


      # lucreate -n s10BE3
      
    3. Activate the new boot environment.


      # luactivate s10BE3
      
    4. Reboot the system.


      # init 6
      

      This step verifies that the ZFS BE and the zones are booted.
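
      For example, after the reboot, confirm that the new BE is active and that the zones are running:


      # lustatus
      # zoneadm list -cv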

  8. Resolve any potential mount-point problems.

    Due to a bug in Oracle Solaris Live Upgrade, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.

    1. Review the zfs list output.

      Look for incorrect temporary mount points. For example:


      # zfs list -r -o name,mountpoint rpool/ROOT/s10u6
      NAME                               MOUNTPOINT
      rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
      rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
      rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

      The mount point for the root ZFS BE (rpool/ROOT/s10u6) should be /.

    2. Reset the mount points for the ZFS BE and its datasets.

      For example:


      # zfs inherit -r mountpoint rpool/ROOT/s10u6
      # zfs set mountpoint=/ rpool/ROOT/s10u6
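
      To confirm that the temporary mount points are gone, the same zfs list command from the previous substep can be rerun. For example:


      # zfs list -r -o name,mountpoint rpool/ROOT/s10u6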
      
    3. Reboot the system.

      When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.