Migrating to a ZFS Root File System or Updating a ZFS Root File System (Live Upgrade)

Live Upgrade features related to UFS components are still available, and they work as in previous releases.

The following features are available:

For detailed information about Oracle Solaris installation and Live Upgrade features, see the Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.

For information about ZFS and Live Upgrade requirements, see Oracle Solaris Installation and Live Upgrade Requirements for ZFS Support.

ZFS Migration Issues With Live Upgrade

Review the following issues before you use Live Upgrade to migrate your UFS root file system to a ZFS root file system:

Using Live Upgrade to Migrate or Update a ZFS Root File System (Without Zones)

The following examples show how to migrate a UFS root file system to a ZFS root file system and how to update a ZFS root file system.

If you are migrating or updating a system with zones, see the following sections:

Example 4-4 Using Live Upgrade to Migrate a UFS Root File System to a ZFS Root File System

The following example shows how to migrate a UFS root file system to a ZFS root file system. The current BE, ufsBE, which contains a UFS root file system, is identified by the -c option. If you do not include the optional -c option, the current BE name defaults to the device name. The new BE, zfsBE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation is performed.

The ZFS storage pool must be created with slices rather than with whole disks to be upgradeable and bootable. Before you create the new pool, ensure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, ensure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slice that is intended for the root pool.
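For example, one way to confirm the disk label before creating the pool is to print it with prtvtoc and, if necessary, relabel the disk with format -e. The following is a sketch only, using the c1t2d0 disk from the example below; adjust the device name for your system:

# prtvtoc /dev/rdsk/c1t2d0s2
# format -e c1t2d0

An SMI (VTOC) label reports the traditional numbered slice table. If the disk carries an EFI label, run format -e, choose the label command, and select the SMI label type, then verify that the slice intended for the root pool (typically slice 0) has the expected size.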

# zpool create rpool mirror c1t2d0s0 c2t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-qD.mnt
updating /.alt.tmp.b-qD.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.

After the lucreate operation completes, use the lustatus command to view the BE status. For example:

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         

Then, review the list of ZFS components. For example:

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 7.17G  59.8G  95.5K  /rpool
rpool/ROOT            4.66G  59.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE      4.66G  59.8G  4.66G  /
rpool/dump               2G  61.8G    16K  -
rpool/swap             517M  60.3G    16K  -

Next, use the luactivate command to activate the new ZFS BE. For example:

# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.

Next, reboot the system to the ZFS BE.

# init 6

Confirm that the ZFS BE is active.

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -         
zfsBE                      yes      yes    yes       no     -      

If you switch back to the UFS BE, you must re-import any ZFS storage pools that were created while the ZFS BE was booted because they are not automatically available in the UFS BE.
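For example, a pool named tank that was created while zfsBE was active would be imported from the UFS BE as follows (tank is a hypothetical pool name):

# zpool import tank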

If the UFS BE is no longer required, you can remove it with the ludelete command.
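For example:

# ludelete ufsBE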

Example 4-5 Using Live Upgrade to Create a ZFS BE From a UFS BE (With a Separate /var)

In the Oracle Solaris 10 8/11 release, you can use the lucreate -D option to specify that you want a separate /var file system created when you migrate a UFS root file system to a ZFS root file system. In the following example, the existing UFS BE is migrated to a ZFS BE with a separate /var file system.

# lucreate -n zfsBE -p rpool -D /var
Determining types of file systems supported
Validating file system requests
Preparing logical storage devices
Preparing physical storage devices
Configuring physical storage devices
Configuring logical storage devices
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <c0t0d0s0>.
Current boot environment is named <c0t0d0s0>.
Creating initial configuration for primary boot environment <c0t0d0s0>.
INFORMATION: No BEs are configured on this system.
The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <c0t0d0s0> PBE Boot Device </dev/dsk/c0t0d0s0>.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <c0t0d0s0>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Creating <zfs> file system for </var> in zone <global> on <rpool/ROOT/zfsBE/var>.
Populating file systems on boot environment <zfsBE>.
Analyzing zones.
Mounting ABE <zfsBE>.
Generating file list.
Copying data from PBE <c0t0d0s0> to ABE <zfsBE>
100% of filenames transferred
Finalizing ABE.
Fixing zonepaths in ABE.
Unmounting ABE <zfsBE>.
Fixing properties on ZFS datasets in ABE.
Reverting state of zones in PBE <c0t0d0s0>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-iaf.mnt
updating /.alt.tmp.b-iaf.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.
# init 6

Review the newly created ZFS file systems. For example:

# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 6.29G  26.9G  32.5K  /rpool
rpool/ROOT            4.76G  26.9G    31K  legacy
rpool/ROOT/zfsBE      4.76G  26.9G  4.67G  /
rpool/ROOT/zfsBE/var  89.5M  26.9G  89.5M  /var
rpool/dump             512M  26.9G   512M  -
rpool/swap            1.03G  28.0G    16K  -

Example 4-6 Using Live Upgrade to Create a ZFS BE From a ZFS BE

Creating a ZFS BE from a ZFS BE in the same pool is very quick because this operation uses ZFS snapshot and clone features. If the current BE resides in the same ZFS pool, the -p option is omitted.

If you have multiple ZFS BEs, do the following to select which BE to boot from:

For more information, see Example 4-12.
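For example, on a SPARC based system, you can list the bootable datasets in the root pool from the ok prompt and then boot a specific BE, as in the following sketch (using the zfs2BE name that is created below). On an x86 based system, you would instead select the BE entry from the GRUB menu.

ok boot -L
.
.
.
ok boot -Z rpool/ROOT/zfs2BE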

# lucreate -n zfs2BE
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.

Example 4-7 Updating Your ZFS BE (luupgrade)

You can update your ZFS BE with additional packages or patches.

The basic process is as follows:

# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -   
# luupgrade -p -n zfsBE -s /net/system/export/s10up/Solaris_10/Product SUNWchxge
Validating the contents of the media </net/install/export/s10up/Solaris_10/Product>.
Mounting the BE <zfsBE>.
Adding packages to the BE <zfsBE>.

Processing package instance <SUNWchxge> from </net/install/export/s10up/Solaris_10/Product>

Chelsio N110 10GE NIC Driver(sparc) 11.10.0,REV=2006.02.15.20.41
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.

This appears to be an attempt to install the same architecture and
version of a package which is already installed.  This installation
will attempt to overwrite this package.

Using </a> as the package base directory.
## Processing package information.
## Processing system information.
   4 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWchxge> [y,n,?] y
Installing Chelsio N110 10GE NIC Driver as <SUNWchxge>

## Installing part 1 of 1.
## Executing postinstall script.

Installation of <SUNWchxge> was successful.
Unmounting the BE <zfsBE>.
The package add to the BE <zfsBE> completed.

Or, you can create a new BE and then upgrade it to a later Oracle Solaris release. For example:

# luupgrade -u -n newBE -s /net/install/export/s10up/latest

where the -s option specifies the location of the Solaris installation medium.

Example 4-8 Creating a ZFS BE With a ZFS Flash Archive (luupgrade)

In the Oracle Solaris 10 8/11 release, you can use the luupgrade command to create a ZFS BE from an existing ZFS flash archive. The basic process is as follows:

  1. Create a flash archive of a master system with a ZFS BE.

    For example:

    master-system# flarcreate -n s10zfsBE /tank/data/s10zfsflar
    Full Flash
    Checking integrity...
    Integrity OK.
    Running precreation scripts...
    Precreation scripts done.
    Determining the size of the archive...
    The archive will be approximately 4.67GB.
    Creating the archive...
    Archive creation complete.
    Running postcreation scripts...
    Postcreation scripts done.
    
    Running pre-exit scripts...
    Pre-exit scripts done.
  2. Make the ZFS flash archive that was created on the master system available to the clone system.

    Possible flash archive locations are a local file system, HTTP, FTP, NFS, and so on. An NFS sharing example is sketched after this procedure.

  3. Create an empty alternate ZFS BE on the clone system.

    Use the -s - option to specify that this is an empty BE to be populated with the ZFS flash archive contents.

    For example:

    clone-system# lucreate -n zfsflashBE -s - -p rpool
    Determining types of file systems supported
    Validating file system requests
    Preparing logical storage devices
    Preparing physical storage devices
    Configuring physical storage devices
    Configuring logical storage devices
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <s10zfsBE>.
    Current boot environment is named <s10zfsBE>.
    Creating initial configuration for primary boot environment <s10zfsBE>.
    INFORMATION: No BEs are configured on this system.
    The device </dev/dsk/c0t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <s10zfsBE> PBE Boot Device </dev/dsk/c0t0d0s0>.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    The device </dev/dsk/c0t1d0s0> is not a root device for any boot environment; cannot get BE ID.
    Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsflashBE>.
    Creation of boot environment <zfsflashBE> successful.
  4. Install the ZFS flash archive into the alternate BE.

    For example:

    clone-system# luupgrade -f -s /net/server/export/s10/latest -n zfsflashBE -a /tank/data/zfs10up2flar
    miniroot filesystem is <lofs>
    Mounting miniroot at </net/server/s10up/latest/Solaris_10/Tools/Boot>
    Validating the contents of the media </net/server/export/s10up/latest>.
    The media is a standard Solaris media.
    Validating the contents of the miniroot </net/server/export/s10up/latest/Solaris_10/Tools/Boot>.
    Locating the flash install program.
    Checking for existence of previously scheduled Live Upgrade requests.
    Constructing flash profile to use.
    Creating flash profile for BE <zfsflashBE>.
    Performing the operating system flash install of the BE <zfsflashBE>.
    CAUTION: Interrupting this process may leave the boot environment unstable or unbootable.
    Extracting Flash Archive: 100% completed (of 5020.86 megabytes)            
    The operating system flash install completed.
    updating /.alt.tmp.b-rgb.mnt/platform/sun4u/boot_archive
    
    The Live Flash Install of the boot environment <zfsflashBE> is complete.
  5. Activate the alternate BE.

    clone-system# luactivate zfsflashBE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfsflashBE>.
    .
    .
    .
    Modifying boot archive service
    Activation of boot environment <zfsflashBE> successful.
  6. Reboot the system.

    clone-system# init 6
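As noted in step 2, one way to make the flash archive available to the clone system is to share the archive directory over NFS from the master system. The following is a sketch only, assuming the /tank/data directory from step 1; the clone system would then reference the archive through the corresponding NFS path in the luupgrade -a option:

master-system# share -F nfs -o ro,anon=0 /tank/data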

Using Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08)

You can use Live Upgrade to migrate a system with zones, but the supported configurations are limited in the Solaris 10 10/08 release. If you are installing or upgrading to at least the Solaris 10 5/09 release, more zone configurations are supported. For more information, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).

This section describes how to install and configure a system with zones so that it can be upgraded and patched with Live Upgrade. If you are migrating to a ZFS root file system without zones, see Using Live Upgrade to Migrate or Update a ZFS Root File System (Without Zones).

If you are migrating a system with zones or if you are configuring a system with zones in the Solaris 10 10/08 release, review the following procedures:

Follow these recommended procedures to set up zones on a system with a ZFS root file system to ensure that you can use Live Upgrade on that system.

How to Migrate a UFS Root File System With Zone Roots on UFS to a ZFS Root File System (Solaris 10 10/08)

This procedure explains how to migrate a UFS root file system with zones installed to a ZFS root file system and ZFS zone root configuration that can be upgraded or patched.

In the steps that follow, the example pool name is rpool, and the example names of the boot environments (BEs) begin with s10BE*.

  1. Upgrade the system to the Solaris 10 10/08 release if it is running a previous Solaris 10 release.

    For more information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.

  2. Create the root pool.
    # zpool create rpool mirror c0t1d0s0 c1t1d0s0

    For information about the root pool requirements, see Oracle Solaris Installation and Live Upgrade Requirements for ZFS Support.

  3. Confirm that the zones from the UFS environment are booted.
  4. Create the new ZFS boot environment.
    # lucreate -n s10BE2 -p rpool

    This command establishes datasets in the root pool for the new BE and copies the current BE (including the zones) to those datasets.

  5. Activate the new ZFS boot environment.
    # luactivate s10BE2

    Now, the system is running a ZFS root file system, but the zone roots on UFS are still in the UFS root file system. The next steps are required to fully migrate the UFS zones to a supported ZFS configuration.

  6. Reboot the system.
    # init 6
  7. Migrate the zones to a ZFS BE.
    1. Boot the zones.
    2. Create another ZFS BE within the pool.
      # lucreate -n s10BE3
    3. Activate the new boot environment.
      # luactivate s10BE3
    4. Reboot the system.
      # init 6

      This step verifies that the ZFS BE and the zones are booted.

  8. Resolve any potential mount-point problems.

    Due to a bug in Live Upgrade, the inactive BE might fail to boot because a ZFS dataset or a zone's ZFS dataset in the BE has an invalid mount point.

    1. Review the zfs list output.

      Look for incorrect temporary mount points. For example:

      # zfs list -r -o name,mountpoint rpool/ROOT/s10up
      
      NAME                               MOUNTPOINT
      rpool/ROOT/s10up                   /.alt.tmp.b-VP.mnt/
      rpool/ROOT/s10up/zones             /.alt.tmp.b-VP.mnt//zones
      rpool/ROOT/s10up/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

      The mount point for the root ZFS BE (rpool/ROOT/s10up) should be /.

    2. Reset the mount points for the ZFS BE and its datasets.

      For example:

      # zfs inherit -r mountpoint rpool/ROOT/s10up
      # zfs set mountpoint=/ rpool/ROOT/s10up
    3. Reboot the system.

      When the option to boot a specific BE is presented, either in the OpenBoot PROM prompt or the GRUB menu, select the BE whose mount points were just corrected.

How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)

This procedure explains how to set up a ZFS root file system and ZFS zone root configuration that can be upgraded or patched. In this configuration, the ZFS zone roots are created as ZFS datasets.

In the steps that follow, the example pool name is rpool and the example name of the active boot environment is s10BE. The name for the zones dataset can be any valid dataset name. In the following example, the zones dataset name is zones.

  1. Install the system with a ZFS root, either by using the interactive text installer or the JumpStart installation method.

    Depending on which installation method you choose, see either Installing a ZFS Root File System (Oracle Solaris Initial Installation) or Installing a ZFS Root File System (JumpStart Installation).

  2. Boot the system from the newly created root pool.
  3. Create a dataset for grouping the zone roots.

    For example:

    # zfs create -o canmount=noauto rpool/ROOT/s10BE/zones

    Setting the noauto value for the canmount property prevents the dataset from being mounted other than by the explicit action of Live Upgrade and the system startup code.

  4. Mount the newly created zones dataset.
    # zfs mount rpool/ROOT/s10BE/zones

    The dataset is mounted at /zones.

  5. Create and mount a dataset for each zone root.
    # zfs create -o canmount=noauto rpool/ROOT/s10BE/zones/zonerootA
    # zfs mount rpool/ROOT/s10BE/zones/zonerootA
  6. Set the appropriate permissions on the zone root directory.
    # chmod 700 /zones/zonerootA
  7. Configure the zone, setting the zone path as follows:
    # zonecfg -z zoneA
        zoneA: No such zone configured
        Use 'create' to begin configuring a new zone.
        zonecfg:zoneA> create
        zonecfg:zoneA> set zonepath=/zones/zonerootA

    You can enable the zones to boot automatically when the system is booted by using the following syntax:

    zonecfg:zoneA> set autoboot=true
  8. Install the zone.
    # zoneadm -z zoneA install
  9. Boot the zone.
    # zoneadm -z zoneA boot

How to Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)

Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots on ZFS. These updates can consist of either a system upgrade or the application of patches.

In the steps that follow, newBE is the example name of the BE that is upgraded or patched.

  1. Create the BE to upgrade or patch.
    # lucreate -n newBE

    The existing BE, including all the zones, is cloned. A dataset is created for each dataset in the original BE. The new datasets are created in the same pool as the current root pool.

  2. Select one of the following to upgrade the system or apply patches to the new BE:
    • Upgrade the system.

      # luupgrade -u -n newBE -s /net/install/export/s10up/latest

      where the -s option specifies the location of the Oracle Solaris installation medium.

    • Apply patches to the new BE.

      # luupgrade -t -n newBE -s /patchdir 139147-02 157347-14
  3. Activate the new BE.
    # luactivate newBE
  4. Boot from the newly activated BE.
    # init 6
  5. Resolve any potential mount-point problems.

    Due to a bug in Live Upgrade, the inactive BE might fail to boot because a ZFS dataset or a zone's ZFS dataset in the BE has an invalid mount point.

    1. Review the zfs list output.

      Look for incorrect temporary mount points. For example:

      # zfs list -r -o name,mountpoint rpool/ROOT/newBE
      
      NAME                               MOUNTPOINT
      rpool/ROOT/newBE                   /.alt.tmp.b-VP.mnt/
      rpool/ROOT/newBE/zones             /.alt.tmp.b-VP.mnt/zones
      rpool/ROOT/newBE/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

      The mount point for the root ZFS BE (rpool/ROOT/newBE) should be /.

    2. Reset the mount points for the ZFS BE and its datasets.

      For example:

      # zfs inherit -r mountpoint rpool/ROOT/newBE
      # zfs set mountpoint=/ rpool/ROOT/newBE
    3. Reboot the system.

      When the option to boot a specific boot environment is presented either at the OpenBoot PROM prompt or the GRUB menu, select the boot environment whose mount points were just corrected.

Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09)

You can use the Oracle Solaris Live Upgrade feature to migrate or upgrade a system with zones starting in the Solaris 10 10/08 release. Additional sparse root and whole root zone configurations are supported by Live Upgrade starting in the Solaris 10 5/09 release.

This section describes how to configure a system with zones so that it can be upgraded and patched with Live Upgrade starting in the Solaris 10 5/09 release. If you are migrating to a ZFS root file system without zones, see Using Live Upgrade to Migrate or Update a ZFS Root File System (Without Zones).

Consider the following points when using Oracle Solaris Live Upgrade with ZFS and zones starting in at least the Solaris 10 5/09 release:

If you are migrating or configuring a system with zones starting in the Solaris 10 5/09 release, review the following information:

Supported ZFS with Zone Root Configuration Information (at Least Solaris 10 5/09)

Review the supported zone configurations before using Oracle Solaris Live Upgrade to migrate or upgrade a system with zones.

How to Create a ZFS BE With a ZFS Root File System and a Zone Root (at Least Solaris 10 5/09)

Use this procedure after you have performed an initial installation of at least the Solaris 10 5/09 release to create a ZFS root file system. Also use this procedure after you have used the luupgrade command to upgrade a ZFS root file system to at least the Solaris 10 5/09 release. A ZFS BE that is created using this procedure can then be upgraded or patched.

In the steps that follow, the example Oracle Solaris 10 9/10 system has a ZFS root file system and a zone root dataset in /rpool/zones. A ZFS BE named zfs2BE is created and can then be upgraded or patched.

  1. Review the existing ZFS file systems.
    # zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    rpool                 7.26G  59.7G    98K  /rpool
    rpool/ROOT            4.64G  59.7G    21K  legacy
    rpool/ROOT/zfsBE      4.64G  59.7G  4.64G  /
    rpool/dump            1.00G  59.7G  1.00G  -
    rpool/export            44K  59.7G    23K  /export
    rpool/export/home       21K  59.7G    21K  /export/home
    rpool/swap               1G  60.7G    16K  -
    rpool/zones            633M  59.7G   633M  /rpool/zones
  2. Ensure that the zones are installed and booted.
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       2 zfszone          running    /rpool/zones                   native   shared
  3. Create the ZFS BE.
    # lucreate -n zfs2BE
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
    Current boot environment is named <zfsBE>.
    Creating initial configuration for primary boot environment <zfsBE>.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Comparing source boot environment <zfsBE> file systems with the file 
    system(s) you specified for the new boot environment. Determining which 
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <zfs2BE>.
    Source boot environment is <zfsBE>.
    Creating boot environment <zfs2BE>.
    Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
    Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
    Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
    Population of boot environment <zfs2BE> successful.
    Creation of boot environment <zfs2BE> successful.
  4. Activate the ZFS BE.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    zfsBE                      yes      yes    yes       no     -         
    zfs2BE                     yes      no     no        yes    -         
    # luactivate zfs2BE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>.
    .
    .
    .
  5. Boot the ZFS BE.
    # init 6
  6. Confirm that the ZFS file systems and zones are created in the new BE.
    # zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    rpool                             7.38G  59.6G    98K  /rpool
    rpool/ROOT                        4.72G  59.6G    21K  legacy
    rpool/ROOT/zfs2BE                 4.72G  59.6G  4.64G  /
    rpool/ROOT/zfs2BE@zfs2BE          74.0M      -  4.64G  -
    rpool/ROOT/zfsBE                  5.45M  59.6G  4.64G  /.alt.zfsBE
    rpool/dump                        1.00G  59.6G  1.00G  -
    rpool/export                        44K  59.6G    23K  /export
    rpool/export/home                   21K  59.6G    21K  /export/home
    rpool/swap                           1G  60.6G    16K  -
    rpool/zones                       17.2M  59.6G   633M  /rpool/zones
    rpool/zones-zfsBE                  653M  59.6G   633M  /rpool/zones-zfsBE
    rpool/zones-zfsBE@zfs2BE          19.9M      -   633M  -
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       - zfszone          installed  /rpool/zones                   native   shared

How to Upgrade or Patch a ZFS Root File System With Zone Roots (at Least Solaris 10 5/09)

Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots in at least the Solaris 10 5/09 release. These updates can consist of either a system upgrade or the application of patches.

In the steps that follow, zfs2BE is the example name of the BE that is upgraded or patched.

  1. Review the existing ZFS file systems.
    # zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    rpool                             7.38G  59.6G   100K  /rpool
    rpool/ROOT                        4.72G  59.6G    21K  legacy
    rpool/ROOT/zfs2BE                 4.72G  59.6G  4.64G  /
    rpool/ROOT/zfs2BE@zfs2BE          75.0M      -  4.64G  -
    rpool/ROOT/zfsBE                  5.46M  59.6G  4.64G  /
    rpool/dump                        1.00G  59.6G  1.00G  -
    rpool/export                        44K  59.6G    23K  /export
    rpool/export/home                   21K  59.6G    21K  /export/home
    rpool/swap                           1G  60.6G    16K  -
    rpool/zones                       22.9M  59.6G   637M  /rpool/zones
    rpool/zones-zfsBE                  653M  59.6G   633M  /rpool/zones-zfsBE
    rpool/zones-zfsBE@zfs2BE          20.0M      -   633M  -
  2. Ensure that the zones are installed and booted.
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       5 zfszone          running    /rpool/zones                   native   shared
  3. Create the ZFS BE to upgrade or patch.
    # lucreate -n zfs2BE
    Analyzing system configuration.
    Comparing source boot environment <zfsBE> file systems with the file 
    system(s) you specified for the new boot environment. Determining which 
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <zfs2BE>.
    Source boot environment is <zfsBE>.
    Creating boot environment <zfs2BE>.
    Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
    Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
    Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
    Creating snapshot for <rpool/zones> on <rpool/zones@zfs10092BE>.
    Creating clone for <rpool/zones@zfs2BE> on <rpool/zones-zfs2BE>.
    Population of boot environment <zfs2BE> successful.
    Creation of boot environment <zfs2BE> successful.
  4. Select one of the following to upgrade the system or apply patches to the new BE:
    • Upgrade the system.

      # luupgrade -u -n zfs2BE -s /net/install/export/s10up/latest

      where the -s option specifies the location of the Oracle Solaris installation medium.

      This process can take a very long time.

      For a complete example of the luupgrade process, see Example 4-9.

    • Apply patches to the new BE.

      # luupgrade -t -n zfs2BE -s /patchdir patch-id-02 patch-id-04
  5. Activate the new boot environment.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    zfsBE                      yes      yes    yes       no     -         
    zfs2BE                     yes      no     no        yes    -    
    # luactivate zfs2BE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>.
    .
    .
    .
  6. Boot from the newly activated boot environment.
    # init 6

Example 4-9 Upgrading a ZFS Root File System With a Zone Root to an Oracle Solaris 10 9/10 ZFS Root File System

In this example, a ZFS BE (zfsBE), which was created on a Solaris 10 10/09 system with a ZFS root file system and zone root in a non-root pool, is upgraded to the Oracle Solaris 10 9/10 release. This process can take a long time. Then, the upgraded BE (zfs2BE) is activated. Ensure that the zones are installed and booted before attempting the upgrade.

In this example, the zonepool pool, the /zonepool/zones dataset, and the zfszone zone are created as follows:

# zpool create zonepool mirror c2t1d0 c2t5d0
# zfs create zonepool/zones
# chmod 700 /zonepool/zones
# zonecfg -z zfszone
zfszone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zfszone> create
zonecfg:zfszone> set zonepath=/zonepool/zones
zonecfg:zfszone> verify
zonecfg:zfszone> exit
# zoneadm -z zfszone install
cannot create ZFS dataset zonepool/zones: dataset already exists
Preparing to install zone <zfszone>.
Creating list of files to copy from the global zone.
Copying <8960> files to the zone.
.
.
.
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   2 zfszone          running    /zonepool/zones                native   shared

# lucreate -n zfsBE
.
.
.
# luupgrade -u -n zfsBE -s /net/install/export/s10up/latest
40410 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/system/export/s10up/latest/Solaris_10/Tools/Boot>
Validating the contents of the media </net/system/export/s10up/latest>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfsBE>.
Determining packages to install or upgrade for BE <zfsBE>.
Performing the operating system upgrade of the BE <zfsBE>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <zfsBE>.
Package information successfully updated on boot environment <zfsBE>.
Adding operating system patches to the BE <zfsBE>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot 
environment <zfsBE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot 
environment <zfsBE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files 
are located on boot environment <zfsBE>. Before you activate boot 
environment <zfsBE>, determine if any additional system maintenance is 
required or if additional media of the software distribution must be 
installed.
The Solaris upgrade of the boot environment <zfsBE> is complete.
Installing failsafe
Failsafe install is complete.
# luactivate zfs2BE
# init 6
# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -         
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - zfszone          installed  /zonepool/zones                native   shared

How to Migrate a UFS Root File System With a Zone Root to a ZFS Root File System (at Least Solaris 10 5/09)

Use this procedure to migrate a system with a UFS root file system and a zone root to at least the Solaris 10 5/09 release. Then, use Live Upgrade to create a ZFS BE.

In the steps that follow, the example UFS BE name is c1t1d0s0, the UFS zone root is zonepool/zfszone, and the ZFS root BE is zfsBE.

  1. Upgrade the system to at least the Solaris 10 5/09 release if it is running a previous Solaris 10 release.

    For information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 1/13 Installation Guide: Live Upgrade and Upgrade Planning.

  2. Create the root pool.

    For information about the root pool requirements, see Oracle Solaris Installation and Live Upgrade Requirements for ZFS Support.

  3. Confirm that the zones from the UFS environment are booted.
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       2 zfszone          running    /zonepool/zones                native   shared
  4. Create the new ZFS BE.
    # lucreate -c c1t1d0s0 -n zfsBE -p rpool

    This command establishes datasets in the root pool for the new BE and copies the current BE (including the zones) to those datasets.

  5. Activate the new ZFS BE.
    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    c1t1d0s0                   yes      no     no        yes    -         
    zfsBE                      yes      yes    yes       no     -         
    # luactivate zfsBE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
    .
    .
    .
  6. Reboot the system.
    # init 6
  7. Confirm that the ZFS file systems and zones are created in the new BE.
    # zfs list
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              6.17G  60.8G    98K  /rpool
    rpool/ROOT                         4.67G  60.8G    21K  /rpool/ROOT
    rpool/ROOT/zfsBE                   4.67G  60.8G  4.67G  /
    rpool/dump                         1.00G  60.8G  1.00G  -
    rpool/swap                          517M  61.3G    16K  -
    zonepool                            634M  7.62G    24K  /zonepool
    zonepool/zones                      270K  7.62G   633M  /zonepool/zones
    zonepool/zones-c1t1d0s0             634M  7.62G   633M  /zonepool/zones-c1t1d0s0
    zonepool/zones-c1t1d0s0@zfsBE       262K      -   633M  -
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       - zfszone          installed  /zonepool/zones                native   shared

Example 4-10 Migrating a UFS Root File System With a Zone Root to a ZFS Root File System

In this example, an Oracle Solaris 10 9/10 system with a UFS root file system and a zone root (/uzone/ufszone), as well as a ZFS non-root pool (pool) and a zone root (/pool/zfszone), is migrated to a ZFS root file system. Ensure that the ZFS root pool is created and that the zones are installed and booted before attempting the migration.

# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   2 ufszone          running    /uzone/ufszone                 native   shared
   3 zfszone          running    /pool/zones/zfszone            native   shared
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <ufszone> to </.alt.tmp.b-EYd.mnt/uzone/ufszone>.
Creating snapshot for <pool/zones/zfszone> on <pool/zones/zfszone@zfsBE>.
Creating clone for <pool/zones/zfszone@zfsBE> on <pool/zones/zfszone-zfsBE>.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-DLd.mnt
updating /.alt.tmp.b-DLd.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         
# luactivate zfsBE    
.
.
.
# init 6
.
.
.
# zfs list
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
pool                                    628M  66.3G    19K  /pool
pool/zones                              628M  66.3G    20K  /pool/zones
pool/zones/zfszone                     75.5K  66.3G   627M  /pool/zones/zfszone
pool/zones/zfszone-ufsBE                628M  66.3G   627M  /pool/zones/zfszone-ufsBE
pool/zones/zfszone-ufsBE@zfsBE           98K      -   627M  -
rpool                                  7.76G  59.2G    95K  /rpool
rpool/ROOT                             5.25G  59.2G    18K  /rpool/ROOT
rpool/ROOT/zfsBE                       5.25G  59.2G  5.25G  /
rpool/dump                             2.00G  59.2G  2.00G  -
rpool/swap                              517M  59.7G    16K  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - ufszone          installed  /uzone/ufszone                 native   shared
   - zfszone          installed  /pool/zones/zfszone            native   shared