You can use Oracle Solaris Live Upgrade to migrate a system with zones, but the supported configurations are limited in the Solaris 10 10/08 release. If you are installing or upgrading to at least the Solaris 10 5/09 release, more zone configurations are supported. For more information, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).
This section describes how to configure and install a system with zones so that it can be upgraded and patched with Oracle Solaris Live Upgrade. If you are migrating to a ZFS root file system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).
If you are migrating a system with zones or if you are configuring a system with zones in the Solaris 10 10/08 release, review the following procedures:
How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
How to Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)
Follow these recommended procedures to set up zones on a system with a ZFS root file system to ensure that you can use Oracle Solaris Live Upgrade on that system.
This procedure explains how to migrate a UFS root file system with zones installed to a ZFS root file system and ZFS zone root configuration that can be upgraded or patched.
In the steps that follow, the example pool name is rpool, and the example name of the active boot environment is s10BE*.
Upgrade the system to the Solaris 10 10/08 release if it is running a previous Solaris 10 release.
For more information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
Create the root pool.
# zpool create rpool mirror c0t1d0 c1t1d0
For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
Confirm that the zones from the UFS environment are booted.
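You can verify the zone state before creating the new boot environment; zoneA is an illustrative zone name:

```shell
# zoneadm list -cv     # each zone to be migrated should show the running state
```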
Create the new ZFS boot environment.
# lucreate -n s10BE2 -p rpool
This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.
Activate the new ZFS boot environment.
# luactivate s10BE2
Now, the system is running a ZFS root file system, but the zone roots on UFS are still in the UFS root file system. The next steps are required to fully migrate the UFS zones to a supported ZFS configuration.
Reboot the system.
# init 6
Migrate the zones to a ZFS BE.
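One way to complete the migration is to create and activate a further boot environment from the running ZFS BE, so that the zone roots are copied into ZFS datasets; the boot environment name s10BE3 is illustrative:

```shell
# lucreate -n s10BE3      # clone the running ZFS BE; zone roots are recreated as ZFS datasets
# luactivate s10BE3       # make the new BE active on the next boot
# init 6                  # reboot into the new BE
```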
Resolve any potential mount-point problems.
Due to a bug in Oracle Solaris Live Upgrade, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.
Review the zfs list output.
Look for incorrect temporary mount points. For example:
# zfs list -r -o name,mountpoint rpool/ROOT/s10u6
NAME                               MOUNTPOINT
rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA
The mount point for the root ZFS BE (rpool/ROOT/s10u6) should be /.
Reset the mount points for the ZFS BE and its datasets.
For example:
# zfs inherit -r mountpoint rpool/ROOT/s10u6
# zfs set mountpoint=/ rpool/ROOT/s10u6
Reboot the system.
When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
This procedure explains how to set up a ZFS root file system and ZFS zone root configuration that can be upgraded or patched. In this configuration, the ZFS zone roots are created as ZFS datasets.
In the steps that follow, the example pool name is rpool, and the example name of the active boot environment is s10BE. The name for the zones dataset can be any legal dataset name. In the following example, the zones dataset name is zones.
Install the system with a ZFS root, either by using the Solaris interactive text installer or the Solaris JumpStart installation method.
For information about installing a ZFS root file system by using the initial installation method or the Solaris JumpStart method, see Installing a ZFS Root File System (Initial Installation) or Installing a ZFS Root File System (Oracle Solaris JumpStart Installation).
Boot the system from the newly created root pool.
Create a dataset for grouping the zone roots.
For example:
# zfs create -o canmount=noauto rpool/ROOT/s10BE/zones
Setting the noauto value for the canmount property prevents the dataset from being mounted other than by the explicit action of Oracle Solaris Live Upgrade and system startup code.
Mount the newly created zones dataset.
# zfs mount rpool/ROOT/s10BE/zones
The dataset is mounted at /zones.
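You can confirm the mount point and mount state of the dataset before proceeding:

```shell
# zfs get mountpoint,mounted rpool/ROOT/s10BE/zones   # mountpoint should be /zones, mounted should be yes
```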
Create and mount a dataset for each zone root.
# zfs create -o canmount=noauto rpool/ROOT/s10BE/zones/zonerootA
# zfs mount rpool/ROOT/s10BE/zones/zonerootA
Set the appropriate permissions on the zone root directory.
# chmod 700 /zones/zonerootA
Configure the zone, setting the zone path as follows:
# zonecfg -z zoneA
zoneA: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zoneA> create
zonecfg:zoneA> set zonepath=/zones/zonerootA
You can enable the zones to boot automatically when the system is booted by using the following syntax:
zonecfg:zoneA> set autoboot=true
Install the zone.
# zoneadm -z zoneA install
Boot the zone.
# zoneadm -z zoneA boot
Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots on ZFS. These updates can be either a system upgrade or the application of patches.
In the steps that follow, newBE is the example name of the boot environment that is upgraded or patched.
Create the boot environment to upgrade or patch.
# lucreate -n newBE
The existing boot environment, including all the zones, is cloned. A dataset is created for each dataset in the original boot environment. The new datasets are created in the same pool as the current root pool.
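You can confirm that the clone completed before upgrading or patching it:

```shell
# lustatus     # newBE should be listed as complete, with the current BE still active
```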
Select one of the following to upgrade the system or apply patches to the new boot environment:
Upgrade the system.
# luupgrade -u -n newBE -s /net/install/export/s10u7/latest
where the -s option specifies the location of the Solaris installation medium.
Apply patches to the new boot environment.
# luupgrade -t -n newBE -s /patchdir 139147-02 157347-14
Activate the new boot environment.
# luactivate newBE
Boot from the newly activated boot environment.
# init 6
Resolve any potential mount-point problems.
Due to a bug in Oracle Solaris Live Upgrade, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.
Review the zfs list output.
Look for incorrect temporary mount points. For example:
# zfs list -r -o name,mountpoint rpool/ROOT/newBE
NAME                               MOUNTPOINT
rpool/ROOT/newBE                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/newBE/zones             /.alt.tmp.b-VP.mnt/zones
rpool/ROOT/newBE/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA
The mount point for the root ZFS BE (rpool/ROOT/newBE) should be /.
Reset the mount points for the ZFS BE and its datasets.
For example:
# zfs inherit -r mountpoint rpool/ROOT/newBE
# zfs set mountpoint=/ rpool/ROOT/newBE
Reboot the system.
When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.