1. Oracle Solaris ZFS File System (Introduction)
2. Getting Started With Oracle Solaris ZFS
3. Oracle Solaris ZFS and Traditional File System Differences
4. Managing Oracle Solaris ZFS Storage Pools
5. Installing and Booting an Oracle Solaris ZFS Root File System
Installing and Booting an Oracle Solaris ZFS Root File System (Overview)
Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support
Oracle Solaris Release Requirements
General ZFS Storage Pool Requirements
Disk Space Requirements for ZFS Storage Pools
ZFS Storage Pool Configuration Requirements
Installing a ZFS Root File System (Initial Installation)
How to Create a Mirrored Root Pool (Post Installation)
Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation)
Installing a ZFS Root File System (Oracle Solaris JumpStart Installation)
JumpStart Profile Examples for ZFS
Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade)
ZFS Migration Issues With Oracle Solaris Live Upgrade
Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones)
Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08)
How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
How to Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
Supported ZFS with Zone Root Configuration Information (at Least Solaris 10 5/09)
How to Create a ZFS BE With a ZFS Root File System and a Zone Root (at Least Solaris 10 5/09)
How to Upgrade or Patch a ZFS Root File System With Zone Roots (at Least Solaris 10 5/09)
ZFS Support for Swap and Dump Devices
Adjusting the Sizes of Your ZFS Swap Device and Dump Device
Troubleshooting ZFS Dump Device Issues
Booting From a ZFS Root File System
Booting From an Alternate Disk in a Mirrored ZFS Root Pool
SPARC: Booting From a ZFS Root File System
x86: Booting From a ZFS Root File System
Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)
How to Resolve ZFS Mount-Point Problems
Booting For Recovery Purposes in a ZFS Root Environment
How to Boot ZFS From Alternate Media
Recovering the ZFS Root Pool or Root Pool Snapshots
How to Replace a Disk in the ZFS Root Pool
How to Create Root Pool Snapshots
How to Recreate a ZFS Root Pool and Restore Root Pool Snapshots
How to Roll Back Root Pool Snapshots From a Failsafe Boot
6. Managing Oracle Solaris ZFS File Systems
7. Working With Oracle Solaris ZFS Snapshots and Clones
8. Using ACLs to Protect Oracle Solaris ZFS Files
9. Oracle Solaris ZFS Delegated Administration
10. Oracle Solaris ZFS Advanced Topics
11. Oracle Solaris ZFS Troubleshooting and Pool Recovery
Oracle Solaris Live Upgrade features related to UFS components are still available, and they work as in previous Solaris releases.
The following features are also available:
When you migrate your UFS root file system to a ZFS root file system, you must designate an existing ZFS storage pool with the -p option.
If the UFS root file system has components on different slices, they are migrated to the ZFS root pool.
You can migrate a system with zones but the supported configurations are limited in the Solaris 10 10/08 release. More zone configurations are supported starting in the Solaris 10 5/09 release. For more information, see the following sections:
If you are migrating a system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).
Oracle Solaris Live Upgrade can use the ZFS snapshot and clone features when you create a new ZFS BE in the same pool, so BE creation is much faster than in previous Solaris releases.
For detailed information about Oracle Solaris installation and Oracle Solaris Live Upgrade features, see the Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
The basic process for migrating a UFS root file system to a ZFS root file system follows:
Install the Solaris 10 10/08, Solaris 10 5/09, Solaris 10 10/09, or Oracle Solaris 10 9/10 release or use the standard upgrade program to upgrade from a previous Solaris 10 release on any supported SPARC based or x86 based system.
When you are running at least the Solaris 10 10/08 release, create a ZFS storage pool for your ZFS root file system.
Use Oracle Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system.
Activate your ZFS BE with the luactivate command.
For information about ZFS and Oracle Solaris Live Upgrade requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
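The following consolidated sketch shows the preceding steps end to end, assuming a mirrored root pool on slices c1t0d0s0 and c1t1d0s0 and a new BE named zfsBE (adjust the device and BE names for your configuration):

# zpool create rpool mirror c1t0d0s0 c1t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
# luactivate zfsBE
# init 6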
Review the following issues before you use Oracle Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system:
The Oracle Solaris installation GUI's standard upgrade option is not available for migrating from a UFS to a ZFS root file system. To migrate from a UFS file system, you must use Oracle Solaris Live Upgrade.
You must create the ZFS storage pool that will be used for booting before the Oracle Solaris Live Upgrade operation. In addition, due to current boot limitations, the ZFS root pool must be created with slices instead of whole disks. For example:
# zpool create rpool mirror c1t0d0s0 c1t1d0s0
Before you create the new pool, ensure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, ensure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slices that are intended for the root pool.
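For example, you can check the current label and, if necessary, relabel the disk (the device name c1t0d0 is illustrative; the format utility prompts interactively, and its label option lets you choose an SMI label):

# prtvtoc /dev/rdsk/c1t0d0s2
# format -e c1t0d0

The prtvtoc command prints the VTOC slice table when the disk carries an SMI label. After relabeling, verify that the slice layout still provides the capacity intended for the root pool.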
You cannot use Oracle Solaris Live Upgrade to create a UFS BE from a ZFS BE. If you migrate your UFS BE to a ZFS BE and you retain your UFS BE, you can boot from either your UFS BE or your ZFS BE.
Do not rename your ZFS BEs with the zfs rename command because the Oracle Solaris Live Upgrade feature cannot detect the name change. Subsequent commands, such as ludelete, will fail. In fact, do not rename your ZFS pools or file systems if you have existing BEs that you want to continue to use.
When creating an alternative BE that is a clone of the primary BE, you cannot use the -f, -x, -y, -Y, and -z options to include or exclude files from the primary BE. You can still use the inclusion and exclusion option set in the following cases:
UFS -> UFS
UFS -> ZFS
ZFS -> ZFS (different pool)
Although you can use Oracle Solaris Live Upgrade to upgrade your UFS root file system to a ZFS root file system, you cannot use Oracle Solaris Live Upgrade to upgrade non-root or shared file systems.
You cannot use the lu command to create or migrate a ZFS root file system.
The following examples show how to migrate a UFS root file system to a ZFS root file system.
If you are migrating or updating a system with zones, see the following sections:
Example 5-3 Using Oracle Solaris Live Upgrade to Migrate a UFS Root File System to a ZFS Root File System
The following example shows how to create a BE of a ZFS root file system from a UFS root file system. The current BE, ufsBE, which contains a UFS root file system, is identified by the -c option. If you do not include the optional -c option, the current BE name defaults to the device name. The new BE, zfsBE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation.
The ZFS storage pool must be created with slices rather than with whole disks to be upgradeable and bootable. Before you create the new pool, ensure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, ensure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slice that is intended for the root pool.
# zpool create rpool mirror c1t2d0s0 c2t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file system(s)
you specified for the new boot environment. Determining which file systems
should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-qD.mnt
updating /.alt.tmp.b-qD.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
After the lucreate operation completes, use the lustatus command to view the BE status. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
Then, review the list of ZFS components. For example:
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
rpool               7.17G  59.8G  95.5K  /rpool
rpool/ROOT          4.66G  59.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE    4.66G  59.8G  4.66G  /
rpool/dump             2G  61.8G    16K  -
rpool/swap           517M  60.3G    16K  -
Next, use the luactivate command to activate the new ZFS BE. For example:
# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

**********************************************************************

The target boot environment has been activated. It will be used when you
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You
MUST USE either the init or the shutdown command when you reboot. If you
do not use either init or shutdown, the system will not boot using the
target BE.

**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.
Next, reboot the system to the ZFS BE.
# init 6
Confirm that the ZFS BE is active.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
If you switch back to the UFS BE, you must re-import any ZFS storage pools that were created while the ZFS BE was booted because they are not automatically available in the UFS BE.
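For example, to make such a pool available again after booting the UFS BE (the pool name datapool is illustrative):

# zpool import datapool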
If the UFS BE is no longer required, you can remove it with the ludelete command.
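For example:

# ludelete ufsBE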
Example 5-4 Using Oracle Solaris Live Upgrade to Create a ZFS BE From a ZFS BE
Creating a ZFS BE from a ZFS BE in the same pool is very quick because this operation uses ZFS snapshot and clone features. If the current BE resides on the same ZFS pool, the -p option is omitted.
If you have multiple ZFS BEs, do the following to select which BE to boot from:
SPARC: Use the boot -L command to identify the available BEs, and then boot the selected BE by using the boot -Z command.
x86: You can select a BE from the GRUB menu.
For more information, see Example 5-9.
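For example, on a SPARC based system you could list the available BEs from the OpenBoot PROM prompt and then boot one of them by its root dataset (the dataset name rpool/ROOT/zfs2BE is illustrative and matches the example that follows):

ok boot -L
ok boot -Z rpool/ROOT/zfs2BE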
# lucreate -n zfs2BE
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file system(s)
you specified for the new boot environment. Determining which file systems
should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
Example 5-5 Upgrading Your ZFS BE (luupgrade)
You can upgrade your ZFS BE with additional packages or patches.
The basic process follows:
Create an alternate BE with the lucreate command.
Activate and boot from the alternate BE.
Upgrade your primary ZFS BE with the luupgrade command to add packages or patches.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
# luupgrade -p -n zfsBE -s /net/system/export/s10up/Solaris_10/Product SUNWchxge

Validating the contents of the media </net/install/export/s10up/Solaris_10/Product>.
Mounting the BE <zfsBE>.
Adding packages to the BE <zfsBE>.

Processing package instance <SUNWchxge> from </net/install/export/s10up/Solaris_10/Product>

Chelsio N110 10GE NIC Driver(sparc) 11.10.0,REV=2006.02.15.20.41
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.

This appears to be an attempt to install the same architecture and
version of a package which is already installed. This installation
will attempt to overwrite this package.

Using </a> as the package base directory.
## Processing package information.
## Processing system information.
   4 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWchxge> [y,n,?] y

Installing Chelsio N110 10GE NIC Driver as <SUNWchxge>

## Installing part 1 of 1.
## Executing postinstall script.

Installation of <SUNWchxge> was successful.

Unmounting the BE <zfsBE>.
The package add to the BE <zfsBE> completed.
You can use Oracle Solaris Live Upgrade to migrate a system with zones, but the supported configurations are limited in the Solaris 10 10/08 release. If you are installing or upgrading to at least the Solaris 10 5/09 release, more zone configurations are supported. For more information, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).
This section describes how to configure and install a system with zones so that it can be upgraded and patched with Oracle Solaris Live Upgrade. If you are migrating to a ZFS root file system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).
If you are migrating a system with zones or if you are configuring a system with zones in the Solaris 10 10/08 release, review the following procedures:
How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
How to Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)
Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)
Follow these recommended procedures to set up zones on a system with a ZFS root file system to ensure that you can use Oracle Solaris Live Upgrade on that system.
This procedure explains how to migrate a UFS root file system with zones installed to a ZFS root file system and ZFS zone root configuration that can be upgraded or patched.
In the steps that follow, the example pool name is rpool, and the example name of the active boot environment is s10BE*.
For more information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
# zpool create rpool mirror c0t1d0 c1t1d0
For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
# lucreate -n s10BE2 -p rpool
This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.
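Optionally, confirm that the new datasets exist in the root pool. For example:

# zfs list -r rpool/ROOT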
# luactivate s10BE2
Now, the system is running a ZFS root file system, but the zone roots on UFS are still in the UFS root file system. The next steps are required to fully migrate the UFS zones to a supported ZFS configuration.
# init 6
# lucreate -n s10BE3
# luactivate s10BE3
# init 6
This step verifies that the ZFS BE and the zones are booted.
Due to a bug in Oracle Solaris Live Upgrade, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.
Look for incorrect temporary mount points. For example:
# zfs list -r -o name,mountpoint rpool/ROOT/s10u6
NAME                               MOUNTPOINT
rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA
The mount point for the root ZFS BE (rpool/ROOT/s10u6) should be /.
For example:
# zfs inherit -r mountpoint rpool/ROOT/s10u6
# zfs set mountpoint=/ rpool/ROOT/s10u6
When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
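For example, on a SPARC based system you could boot the corrected BE directly by specifying its root dataset at the ok prompt (the dataset name matches the preceding example):

ok boot -Z rpool/ROOT/s10u6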
This procedure explains how to set up a ZFS root file system and ZFS zone root configuration that can be upgraded or patched. In this configuration, the ZFS zone roots are created as ZFS datasets.
In the steps that follow, the example pool name is rpool and the example name of the active boot environment is s10BE. The name for the zones dataset can be any legal dataset name. In the following example, the zones dataset name is zones.
For information about installing a ZFS root file system by using the initial installation method or the Solaris JumpStart method, see Installing a ZFS Root File System (Initial Installation) or Installing a ZFS Root File System (Oracle Solaris JumpStart Installation).
For example:
# zfs create -o canmount=noauto rpool/ROOT/s10BE/zones
Setting the noauto value for the canmount property prevents the dataset from being mounted other than by the explicit action of Oracle Solaris Live Upgrade and system startup code.
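You can confirm the property setting. For example:

# zfs get canmount rpool/ROOT/s10BE/zones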
# zfs mount rpool/ROOT/s10BE/zones
The dataset is mounted at /zones.
# zfs create -o canmount=noauto rpool/ROOT/s10BE/zones/zonerootA
# zfs mount rpool/ROOT/s10BE/zones/zonerootA
# chmod 700 /zones/zonerootA
# zonecfg -z zoneA
zoneA: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zoneA> create
zonecfg:zoneA> set zonepath=/zones/zonerootA
You can enable the zones to boot automatically when the system is booted by using the following syntax:
zonecfg:zoneA> set autoboot=true
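Before installing the zone, verify and commit the configuration and then exit the zonecfg session; zonecfg saves the configuration when it is committed. For example:

zonecfg:zoneA> verify
zonecfg:zoneA> commit
zonecfg:zoneA> exit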
# zoneadm -z zoneA install
# zoneadm -z zoneA boot
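To confirm that the zone is installed and running, list the configured zones. For example:

# zoneadm list -cv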
Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots on ZFS. These updates can either be a system upgrade or the application of patches.
In the steps that follow, newBE is the example name of the boot environment that is upgraded or patched.
# lucreate -n newBE
The existing boot environment, including all the zones, is cloned. A dataset is created for each dataset in the original boot environment. The new datasets are created in the same pool as the current root pool.
Upgrade the system.
# luupgrade -u -n newBE -s /net/install/export/s10u7/latest
where the -s option specifies the location of the Solaris installation medium.
Apply patches to the new boot environment.
# luupgrade -t -n newBE -s /patchdir 139147-02 157347-14
# luactivate newBE
# init 6
Due to a bug in the Oracle Solaris Live Upgrade feature, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.
Look for incorrect temporary mount points. For example:
# zfs list -r -o name,mountpoint rpool/ROOT/newBE
NAME                               MOUNTPOINT
rpool/ROOT/newBE                   /.alt.tmp.b-VP.mnt/
rpool/ROOT/newBE/zones             /.alt.tmp.b-VP.mnt/zones
rpool/ROOT/newBE/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA
The mount point for the root ZFS BE (rpool/ROOT/newBE) should be /.
For example:
# zfs inherit -r mountpoint rpool/ROOT/newBE
# zfs set mountpoint=/ rpool/ROOT/newBE
When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.
You can use the Oracle Solaris Live Upgrade feature to migrate or upgrade a system with zones starting in the Solaris 10 10/08 release. Additional sparse-root and whole-root zone configurations are supported by Live Upgrade starting in the Solaris 10 5/09 release.
This section describes how to configure a system with zones so that it can be upgraded and patched with Oracle Solaris Live Upgrade starting in the Solaris 10 5/09 release. If you are migrating to a ZFS root file system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).
Consider the following points when using Oracle Solaris Live Upgrade with ZFS and zones starting in at least the Solaris 10 5/09 release:
To use Oracle Solaris Live Upgrade with zone configurations that are supported starting in at least the Solaris 10 5/09 release, you must first upgrade your system to at least the Solaris 10 5/09 release by using the standard upgrade program.
Then, with Oracle Solaris Live Upgrade, you can either migrate your UFS root file system with zone roots to a ZFS root file system or you can upgrade or patch your ZFS root file system and zone roots.
You cannot directly migrate unsupported zone configurations from a previous Solaris 10 release to at least the Solaris 10 5/09 release.
If you are migrating or configuring a system with zones starting in the Solaris 10 5/09 release, review the following information:
Supported ZFS with Zone Root Configuration Information (at Least Solaris 10 5/09)
How to Create a ZFS BE With a ZFS Root File System and a Zone Root (at Least Solaris 10 5/09)
How to Upgrade or Patch a ZFS Root File System With Zone Roots (at Least Solaris 10 5/09)
Review the supported zone configurations before using Oracle Solaris Live Upgrade to migrate or upgrade a system with zones.
Migrate a UFS root file system to a ZFS root file system – The following configurations of zone roots are supported:
In a directory in the UFS root file system
In a subdirectory of a mount point in the UFS root file system
UFS root file system with a zone root in a UFS root file system directory or in a subdirectory of a UFS root file system mount point and a ZFS non-root pool with a zone root
The following UFS/zone configuration is not supported: UFS root file system that has a zone root as a mount point.
Migrate or upgrade a ZFS root file system – The following configurations of zone roots are supported:
In a dataset in the ZFS root pool. In some cases, if a dataset for the zone root is not provided before the Oracle Solaris Live Upgrade operation, a dataset for the zone root (zoneds) will be created by Oracle Solaris Live Upgrade.
In a subdirectory of the ZFS root file system
In a dataset outside of the ZFS root file system
In a subdirectory of a dataset outside of the ZFS root file system
In a dataset in a non root pool. In the following example, zonepool/zones is a dataset that contains the zone roots, and rpool contains the ZFS BE:
zonepool
   zonepool/zones
   zonepool/zones/myzone
rpool
   rpool/ROOT
   rpool/ROOT/myBE
Oracle Solaris Live Upgrade snapshots and clones the zones in zonepool and the rpool BE if you use this syntax:
# lucreate -n newBE
The newBE BE in rpool/ROOT/newBE is created. When activated, newBE provides access to the zonepool components.
In the preceding example, if /zonepool/zones were a subdirectory and not a separate dataset, Live Upgrade would migrate it as a component of the root pool, rpool.
Migration or upgrade information for zones on both UFS and ZFS – Review the following considerations that might affect a migration or an upgrade of either a UFS or a ZFS environment:
If you configured your zones as described in Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) in the Solaris 10 10/08 release and have upgraded to at least the Solaris 10 5/09 release, you should be able to migrate to a ZFS root file system or use Oracle Solaris Live Upgrade to upgrade to at least the Solaris 10 5/09 release.
Do not create zone roots in nested directories, for example, zones/zone1 and zones/zone1/zone2. Otherwise, mounting might fail at boot time.
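For example, sibling zone roots such as the following are supported, whereas placing zone2 under zones/zone1 is not (the pool and dataset names are illustrative):

zonepool/zones/zone1
zonepool/zones/zone2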
Use this procedure after you have performed an initial installation of at least the Solaris 10 5/09 release to create a ZFS root file system. Also use this procedure after you have used the luupgrade feature to upgrade a ZFS root file system to at least the Solaris 10 5/09 release. A ZFS BE that is created using this procedure can then be upgraded or patched.
In the steps that follow, the example Oracle Solaris 10 9/10 system has a ZFS root file system and a zone root dataset in /rpool/zones. A ZFS BE named zfs2BE is created and can then be upgraded or patched.
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 7.26G  59.7G    98K  /rpool
rpool/ROOT            4.64G  59.7G    21K  legacy
rpool/ROOT/zfsBE      4.64G  59.7G  4.64G  /
rpool/dump            1.00G  59.7G  1.00G  -
rpool/export            44K  59.7G    23K  /export
rpool/export/home       21K  59.7G    21K  /export/home
rpool/swap               1G  60.7G    16K  -
rpool/zones            633M  59.7G   633M  /rpool/zones
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   2 zfszone          running    /rpool/zones                   native   shared
# lucreate -n zfs2BE
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file system(s)
you specified for the new boot environment. Determining which file systems
should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfs2BE                     yes      no     no        yes    -
# luactivate zfs2BE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>.
.
.
.
# init 6
# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        7.38G  59.6G    98K  /rpool
rpool/ROOT                   4.72G  59.6G    21K  legacy
rpool/ROOT/zfs2BE            4.72G  59.6G  4.64G  /
rpool/ROOT/zfs2BE@zfs2BE     74.0M      -  4.64G  -
rpool/ROOT/zfsBE             5.45M  59.6G  4.64G  /.alt.zfsBE
rpool/dump                   1.00G  59.6G  1.00G  -
rpool/export                   44K  59.6G    23K  /export
rpool/export/home              21K  59.6G    21K  /export/home
rpool/swap                      1G  60.6G    16K  -
rpool/zones                  17.2M  59.6G   633M  /rpool/zones
rpool/zones-zfsBE             653M  59.6G   633M  /rpool/zones-zfsBE
rpool/zones-zfsBE@zfs2BE     19.9M      -   633M  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zfszone          installed  /rpool/zones                   native   shared
Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots in at least the Solaris 10 5/09 release. These updates can either be a system upgrade or the application of patches.
In the steps that follow, zfs2BE is the example name of the boot environment that is upgraded or patched.
# zfs list
NAME                          USED  AVAIL  REFER  MOUNTPOINT
rpool                        7.38G  59.6G   100K  /rpool
rpool/ROOT                   4.72G  59.6G    21K  legacy
rpool/ROOT/zfs2BE            4.72G  59.6G  4.64G  /
rpool/ROOT/zfs2BE@zfs2BE     75.0M      -  4.64G  -
rpool/ROOT/zfsBE             5.46M  59.6G  4.64G  /
rpool/dump                   1.00G  59.6G  1.00G  -
rpool/export                   44K  59.6G    23K  /export
rpool/export/home              21K  59.6G    21K  /export/home
rpool/swap                      1G  60.6G    16K  -
rpool/zones                  22.9M  59.6G   637M  /rpool/zones
rpool/zones-zfsBE             653M  59.6G   633M  /rpool/zones-zfsBE
rpool/zones-zfsBE@zfs2BE     20.0M      -   633M  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   5 zfszone          running    /rpool/zones                   native   shared
# lucreate -n zfs2BE
Analyzing system configuration.
Comparing source boot environment <zfsBE> file systems with the file system(s)
you specified for the new boot environment. Determining which file systems
should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Creating snapshot for <rpool/zones> on <rpool/zones@zfs2BE>.
Creating clone for <rpool/zones@zfs2BE> on <rpool/zones-zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
Upgrade the system.
# luupgrade -u -n zfs2BE -s /net/install/export/s10up/latest
where the -s option specifies the location of the Solaris installation medium.
This process can take a very long time.
For a complete example of the luupgrade process, see Example 5-6.
Apply patches to the new boot environment.
# luupgrade -t -n zfs2BE -s /patchdir patch-id-02 patch-id-04
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfs2BE                     yes      no     no        yes    -
# luactivate zfs2BE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>.
.
.
.
# init 6
Example 5-6 Upgrading a ZFS Root File System With a Zone Root to an Oracle Solaris 10 9/10 ZFS Root File System
In this example, a ZFS BE (zfsBE), which was created on a Solaris 10 10/09 system with a ZFS root file system and zone root in a non root pool, is upgraded to the Oracle Solaris 10 9/10 release. This process can take a long time. Then, the upgraded BE (zfs2BE) is activated. Ensure that the zones are installed and booted before attempting the upgrade.
In this example, the zonepool pool, the /zonepool/zones dataset, and the zfszone zone are created as follows:
# zpool create zonepool mirror c2t1d0 c2t5d0
# zfs create zonepool/zones
# chmod 700 /zonepool/zones
# zonecfg -z zfszone
zfszone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zfszone> create
zonecfg:zfszone> set zonepath=/zonepool/zones
zonecfg:zfszone> verify
zonecfg:zfszone> exit
# zoneadm -z zfszone install
cannot create ZFS dataset zonepool/zones: dataset already exists
Preparing to install zone <zfszone>.
Creating list of files to copy from the global zone.
Copying <8960> files to the zone.
.
.
.
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   2 zfszone          running    /zonepool/zones                native   shared
# lucreate -n zfsBE
.
.
.
# luupgrade -u -n zfsBE -s /net/install/export/s10up/latest

40410 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/system/export/s10up/latest/Solaris_10/Tools/Boot>
Validating the contents of the media </net/system/export/s10up/latest>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfsBE>.
Determining packages to install or upgrade for BE <zfsBE>.
Performing the operating system upgrade of the BE <zfsBE>.
CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <zfsBE>.
Package information successfully updated on boot environment <zfsBE>.
Adding operating system patches to the BE <zfsBE>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot
environment <zfsBE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot
environment <zfsBE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files
are located on boot environment <zfsBE>. Before you activate boot
environment <zfsBE>, determine if any additional system maintenance is
required or if additional media of the software distribution must be installed.
The Solaris upgrade of the boot environment <zfsBE> is complete.
Installing failsafe
Failsafe install is complete.
# luactivate zfsBE
# init 6
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -
zfs2BE                     yes      yes    yes       no     -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zfszone          installed  /zonepool/zones                native   shared
Use this procedure to migrate a system with a UFS root file system and a zone root to at least the Solaris 10 5/09 release. Then, use Oracle Solaris Live Upgrade to create a ZFS BE.
In the steps that follow, the example UFS BE name is c1t1d0s0, the UFS zone root is zonepool/zfszone, and the ZFS root BE is zfsBE.
For information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.
For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
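The ZFS root pool must exist before you create the ZFS BE. A minimal sketch, assuming mirrored slices c1t0d0s0 and c1t1d0s0 with SMI (VTOC) labels (adjust the device names for your configuration):

# zpool create rpool mirror c1t0d0s0 c1t1d0s0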
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   2 zfszone          running    /zonepool/zones                native   shared
# lucreate -c c1t1d0s0 -n zfsBE -p rpool
This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
c1t1d0s0                   yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
.
.
.
# init 6
# zfs list
NAME                              USED  AVAIL  REFER  MOUNTPOINT
rpool                            6.17G  60.8G    98K  /rpool
rpool/ROOT                       4.67G  60.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE                 4.67G  60.8G  4.67G  /
rpool/dump                       1.00G  60.8G  1.00G  -
rpool/swap                        517M  61.3G    16K  -
zonepool                          634M  7.62G    24K  /zonepool
zonepool/zones                    270K  7.62G   633M  /zonepool/zones
zonepool/zones-c1t1d0s0           634M  7.62G   633M  /zonepool/zones-c1t1d0s0
zonepool/zones-c1t1d0s0@zfsBE     262K      -   633M  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - zfszone          installed  /zonepool/zones                native   shared
Example 5-7 Migrating a UFS Root File System With a Zone Root to a ZFS Root File System
In this example, an Oracle Solaris 10 9/10 system with a UFS root file system and a zone root (/uzone/ufszone), as well as a ZFS non-root pool (pool) and a zone root (/pool/zones/zfszone), is migrated to a ZFS root file system. Ensure that the ZFS root pool is created and that the zones are installed and booted before attempting the migration.
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   2 ufszone          running    /uzone/ufszone                 native   shared
   3 zfszone          running    /pool/zones/zfszone            native   shared
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file system(s)
you specified for the new boot environment. Determining which file systems
should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <ufszone> to </.alt.tmp.b-EYd.mnt/uzone/ufszone>.
Creating snapshot for <pool/zones/zfszone> on <pool/zones/zfszone@zfsBE>.
Creating clone for <pool/zones/zfszone@zfsBE> on <pool/zones/zfszone-zfsBE>.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-DLd.mnt
updating /.alt.tmp.b-DLd.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
# luactivate zfsBE
.
.
.
# init 6
.
.
.
# zfs list
NAME                               USED  AVAIL  REFER  MOUNTPOINT
pool                               628M  66.3G    19K  /pool
pool/zones                         628M  66.3G    20K  /pool/zones
pool/zones/zfszone                75.5K  66.3G   627M  /pool/zones/zfszone
pool/zones/zfszone-ufsBE           628M  66.3G   627M  /pool/zones/zfszone-ufsBE
pool/zones/zfszone-ufsBE@zfsBE      98K      -   627M  -
rpool                             7.76G  59.2G    95K  /rpool
rpool/ROOT                        5.25G  59.2G    18K  /rpool/ROOT
rpool/ROOT/zfsBE                  5.25G  59.2G  5.25G  /
rpool/dump                        2.00G  59.2G  2.00G  -
rpool/swap                         517M  59.7G    16K  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              native   shared
   - ufszone          installed  /uzone/ufszone                 native   shared
   - zfszone          installed  /pool/zones/zfszone            native   shared