Oracle Solaris ZFS Administration Guide

Chapter 5 Installing and Booting an Oracle Solaris ZFS Root File System

This chapter describes how to install and boot an Oracle Solaris ZFS root file system. Migrating a UFS root file system to a ZFS root file system by using Oracle Solaris Live Upgrade is also covered.

The following sections are provided in this chapter:

For a list of known issues in this release, see Oracle Solaris 10 9/10 Release Notes.

For up-to-date troubleshooting information, go to the following site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Installing and Booting an Oracle Solaris ZFS Root File System (Overview)

Starting in the Solaris 10 10/08 release, you can install and boot from a ZFS root file system in the following ways:

After a SPARC based or an x86 based system is installed with or migrated to a ZFS root file system, the system boots automatically from the ZFS root file system. For more information about boot changes, see Booting From a ZFS Root File System.

ZFS Installation Features

The following ZFS installation features are provided in this Solaris release:

The following installation features are not provided in this release:

Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support

Ensure that the following requirements are met before attempting to install a system with a ZFS root file system or attempting to migrate a UFS root file system to a ZFS root file system.

Oracle Solaris Release Requirements

You can install and boot a ZFS root file system or migrate to a ZFS root file system in the following ways:

General ZFS Storage Pool Requirements

The following sections describe ZFS root pool space and configuration requirements.

Disk Space Requirements for ZFS Storage Pools

The required minimum amount of available pool space for a ZFS root file system is larger than for a UFS root file system because swap and dump devices must be separate devices in a ZFS root environment. By default, swap and dump devices are the same device in a UFS root file system.

When a system is installed or upgraded with a ZFS root file system, the size of the swap area and the dump device are dependent upon the amount of physical memory. The minimum amount of available pool space for a bootable ZFS root file system depends upon the amount of physical memory, the disk space available, and the number of boot environments (BEs) to be created.
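
For example, after an installation that accepts the default sizes, you can review the swap and dump volumes that were created. The following is a quick check, assuming the default pool name rpool; the sizes shown are taken from the installation example later in this chapter and vary with the amount of physical memory:


# zfs list rpool/swap rpool/dump
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool/swap  2.06G  61.0G    16K  -
rpool/dump  1.50G  58.9G  1.50G  -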

Review the following disk space requirements for ZFS storage pools:

ZFS Storage Pool Configuration Requirements

Review the following ZFS storage pool configuration requirements:

Installing a ZFS Root File System (Initial Installation)

In this Solaris release, you can perform an initial installation by using the Solaris interactive text installer to create a ZFS storage pool that contains a bootable ZFS root file system. If you have an existing ZFS storage pool that you want to use for your ZFS root file system, then you must use Oracle Solaris Live Upgrade to migrate your existing UFS root file system to a ZFS root file system in an existing ZFS storage pool. For more information, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).

If you will be configuring zones after the initial installation of a ZFS root file system and you plan on patching or upgrading the system, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) or Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).

If you already have ZFS storage pools on the system, they are acknowledged by the following message. However, these pools remain untouched, unless you select the disks in the existing pools to create the new storage pool.


There are existing ZFS pools available on this system.  However, they can only be upgraded 
using the Live Upgrade tools.  The following screens will only allow you to install a ZFS root system, 
not upgrade one.

Caution –

Existing pools will be destroyed if any of their disks are selected for the new pool.


Before you begin the initial installation to create a ZFS storage pool, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.


Example 5–1 Initial Installation of a Bootable ZFS Root File System

The Solaris interactive text installation process is basically the same as in previous Solaris releases, except that you are prompted to create a UFS or a ZFS root file system. UFS is still the default file system in this release. If you select a ZFS root file system, you are prompted to create a ZFS storage pool. The steps for installing a ZFS root file system follow:

  1. Select the Solaris interactive installation method because a Solaris Flash installation is not available to create a bootable ZFS root file system. However, you can create a ZFS flash archive to be used during a JumpStart installation. For more information, see Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation).

    Starting in the Solaris 10 10/08 release, you can migrate a UFS root file system to a ZFS root file system as long as at least the Solaris 10 10/08 release is already installed. For more information about migrating to a ZFS root file system, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).

  2. To create a ZFS root file system, select the ZFS option. For example:


    Choose Filesystem Type
    
      Select the filesystem to use for your Solaris installation
    
    
                [ ] UFS
                [X] ZFS
  3. After you select the software to be installed, you are prompted to select the disks to create your ZFS storage pool. This screen is similar to the one in previous Solaris releases.


    Select Disks
    
      On this screen you must select the disks for installing Solaris software.
      Start by looking at the Suggested Minimum field; this value is the
      approximate space needed to install the software you've selected. For ZFS,
      multiple disks will be configured as mirrors, so the disk you choose, or the
      slice within the disk must exceed the Suggested Minimum value.
      NOTE: ** denotes current boot disk
    
      Disk Device                                              Available Space
      =============================================================================
      [X]    c1t0d0                                           69994 MB  (F4 to edit)
      [ ]    c1t1d0                                           69994 MB
      [-]    c1t2d0                                               0 MB
      [-]    c1t3d0                                               0 MB
    
                                      Maximum Root Size:  69994 MB
                                      Suggested Minimum:   8279 MB

    You can select the disk or disks to be used for your ZFS root pool. If you select two disks, a mirrored two-disk configuration is set up for your root pool. Either a two-disk or a three-disk mirrored pool is optimal. If you have eight disks and you select all of them, those eight disks are used for the root pool as one big mirror. This configuration is not optimal. Another option is to create a mirrored root pool after the initial installation is complete. A RAID-Z pool configuration for the root pool is not supported. For more information about configuring ZFS storage pools, see Replication Features of a ZFS Storage Pool.

  4. To select two disks to create a mirrored root pool, use the cursor control keys to select the second disk. In the following example, both c1t1d0 and c1t2d0 are selected as the root pool disks. Both disks must have an SMI label and a slice 0. If the disks are not labeled with an SMI label or they don't contain slices, then you must exit the installation program, use the format utility to relabel and repartition the disks, and then restart the installation program.


    Select Disks
    
      On this screen you must select the disks for installing Solaris software.
      Start by looking at the Suggested Minimum field; this value is the
      approximate space needed to install the software you've selected. For ZFS,
      multiple disks will be configured as mirrors, so the disk you choose, or the
      slice within the disk must exceed the Suggested Minimum value.
      NOTE: ** denotes current boot disk
    
      Disk Device                                              Available Space
      =============================================================================
      [X]    c1t0d0                                           69994 MB  
      [X]    c1t1d0                                           69994 MB  (F4 to edit)
      [-]    c1t2d0                                               0 MB
      [-]    c1t3d0                                               0 MB
    
                                      Maximum Root Size:  69994 MB
                                      Suggested Minimum:   8279 MB

    If the Available Space column identifies 0 MB, the disk most likely has an EFI label. If you want to use a disk with an EFI label, you will need to exit the installation program, relabel the disk with an SMI label by using the format -e command, then restart the installation program.
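
    The following is a rough sketch of such an interactive format session; the exact menus and prompts vary by disk, firmware, and Solaris release, so treat it only as an illustration:


    # format -e c1t1d0
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[1]: 0
    Ready to label disk, continue? y
    format> quit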

    If you do not create a mirrored root pool during installation, you can easily create one after the installation. For information, see How to Create a Mirrored Root Pool (Post Installation).

  5. After you have selected a disk or disks for your ZFS storage pool, a screen similar to the following is displayed:


    Configure ZFS Settings
    
      Specify the name of the pool to be created from the disk(s) you have chosen.
      Also specify the name of the dataset to be created within the pool that is
      to be used as the root directory for the filesystem.
    
                  ZFS Pool Name: rpool                                   
          ZFS Root Dataset Name: s10s_u9wos_08
          ZFS Pool Size (in MB): 69995
      Size of Swap Area (in MB): 2048
      Size of Dump Area (in MB): 1536
            (Pool size must be between 6231 MB and 69995 MB)
    
                             [X] Keep / and /var combined
                             [ ] Put /var on a separate dataset

    From this screen, you can change the name of the ZFS pool, the dataset name, the pool size, and the swap and dump device sizes by using the cursor control keys to move through the entries and replacing the default values with new values. Or, you can accept the default values. In addition, you can modify how the /var file system is created and mounted.

    In this example, the root dataset name is changed to zfsBE.


                  ZFS Pool Name: rpool
          ZFS Root Dataset Name: zfsBE                                   
          ZFS Pool Size (in MB): 69995
      Size of Swap Area (in MB): 2048
      Size of Dump Area (in MB): 1536
            (Pool size must be between 6231 MB and 69995 MB)
    
                             [X] Keep / and /var combined
                             [ ] Put /var on a separate dataset
  6. You can change the installation profile at this final installation screen. For example:


    Profile
    
      The information shown below is your profile for installing Solaris software.
      It reflects the choices you've made on previous screens.
    
      ============================================================================
    
                    Installation Option: Initial
                            Boot Device: c1t0d0
                  Root File System Type: ZFS
                        Client Services: None
    
                                Regions: North America
                          System Locale: C ( C )
    
                               Software: Solaris 10, Entire Distribution
                              Pool Name: rpool
                  Boot Environment Name: zfsBE
                              Pool Size: 69995 MB
                        Devices in Pool: c1t0d0
                                         c1t1d0
  7. After the installation is completed, review the resulting ZFS storage pool and file system information. For example:


    # zpool status
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    # zfs list
    NAME                USED  AVAIL  REFER  MOUNTPOINT
    rpool              8.03G  58.9G    96K  /rpool
    rpool/ROOT         4.47G  58.9G    21K  legacy
    rpool/ROOT/zfsBE   4.47G  58.9G  4.47G  /
    rpool/dump         1.50G  58.9G  1.50G  -
    rpool/export         44K  58.9G    23K  /export
    rpool/export/home    21K  58.9G    21K  /export/home
    rpool/swap         2.06G  61.0G    16K  -

    The sample zfs list output identifies the root pool components, such as the rpool/ROOT directory, which is not accessible by default.

  8. To create another ZFS boot environment (BE) in the same storage pool, you can use the lucreate command. In the following example, a new BE named zfs2BE is created. The current BE is named zfsBE, as shown in the zfs list output. However, the current BE is not acknowledged in the lustatus output until the new BE is created.


    # lustatus
    ERROR: No boot environments are configured on this system
    ERROR: cannot determine list of all boot environment names

    If you create a new ZFS BE in the same pool, use syntax similar to the following:


    # lucreate -n zfs2BE
    INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
    Current boot environment is named <zfsBE>.
    Creating initial configuration for primary boot environment <zfsBE>.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Comparing source boot environment <zfsBE> file systems with the file 
    system(s) you specified for the new boot environment. Determining which 
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <zfs2BE>.
    Source boot environment is <zfsBE>.
    Creating boot environment <zfs2BE>.
    Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
    Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
    Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
    Population of boot environment <zfs2BE> successful.
    Creation of boot environment <zfs2BE> successful.

    Creating a ZFS BE within the same pool uses ZFS clone and snapshot features to instantly create the BE. For more details about using Oracle Solaris Live Upgrade for a ZFS root migration, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).
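
    The underlying operations are roughly equivalent to the following snapshot and clone commands. This is only an illustration; use lucreate, which also sets dataset properties such as canmount and updates the BE configuration:


    # zfs snapshot rpool/ROOT/zfsBE@zfs2BE
    # zfs clone rpool/ROOT/zfsBE@zfs2BE rpool/ROOT/zfs2BE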

  9. Next, verify the new boot environments. For example:


    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    zfsBE                      yes      yes    yes       no     -         
    zfs2BE                     yes      no     no        yes    -        
    # zfs list
    NAME                      USED  AVAIL  REFER  MOUNTPOINT
    rpool                    8.03G  58.9G    97K  /rpool
    rpool/ROOT               4.47G  58.9G    21K  legacy
    rpool/ROOT/zfs2BE         116K  58.9G  4.47G  /
    rpool/ROOT/zfsBE         4.47G  58.9G  4.47G  /
    rpool/ROOT/zfsBE@zfs2BE  75.5K      -  4.47G  -
    rpool/dump               1.50G  58.9G  1.50G  -
    rpool/export               44K  58.9G    23K  /export
    rpool/export/home          21K  58.9G    21K  /export/home
    rpool/swap               2.06G  61.0G    16K  -
  10. To boot from an alternate BE, use the luactivate command. After you activate the BE on a SPARC based system, use the boot -L command to identify the available BEs when the boot device contains a ZFS storage pool. When booting an x86 based system, identify the BE to be booted from the GRUB menu.

    For example, on a SPARC based system, use the boot -L command to display a list of available BEs. To boot from the new BE, zfs2BE, select option 2. Then, type the displayed boot -Z command.


    ok boot -L
    Executing last command: boot -L                                       
    Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L
    1 zfsBE
    2 zfs2BE
    Select environment to boot: [ 1 - 2 ]: 2
    
    To boot the selected entry, invoke:
    boot [<root-device>] -Z rpool/ROOT/zfs2BE
    ok boot -Z rpool/ROOT/zfs2BE
    

For more information about booting a ZFS file system, see Booting From a ZFS Root File System.


Procedure: How to Create a Mirrored Root Pool (Post Installation)

If you did not create a mirrored ZFS root pool during installation, you can easily create one after the installation.

For information about replacing a disk in the root pool, see How to Replace a Disk in the ZFS Root Pool.

  1. Display your current root pool status.


    # zpool status rpool
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              c1t0d0s0  ONLINE       0     0     0
    
    errors: No known data errors
  2. Attach a second disk to configure a mirrored root pool.


    # zpool attach rpool c1t0d0s0 c1t1d0s0
    Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
    Make sure to wait until resilver is done before rebooting.
  3. View the root pool status to confirm that resilvering is complete.


    # zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress for 0h1m, 24.26% done, 0h3m to go
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0  3.18G resilvered
    
    errors: No known data errors

    In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:


    scrub: resilver completed after 0h10m with 0 errors on Thu Mar 11 11:27:22 2010
  4. Apply boot blocks to the second disk after resilvering is complete.


    sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    

    x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
    
  5. Verify that you can boot successfully from the second disk.

  6. Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
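
    On a SPARC based system, for example, you can set the OpenBoot PROM boot-device variable with the eeprom command. The device alias disk1 below is only an assumption; confirm the correct alias or device path for your second disk (for example, with the devalias command at the ok prompt) before setting it:


    # eeprom boot-device="disk1 disk"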

Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation)

Starting in the Solaris 10 10/09 release, you can create a flash archive on a system that is running a UFS root file system or a ZFS root file system. A flash archive of a ZFS root pool contains the entire pool hierarchy, except for the swap and dump volumes, and any excluded datasets. The swap and dump volumes are created when the flash archive is installed. You can use the flash archive installation method as follows:

Review the following limitations before you consider installing a system with a ZFS flash archive:

After a master system is installed with or upgraded to at least the Solaris 10 10/09 release, you can create a ZFS flash archive to be used to install a target system. The basic process follows:

The following archive options are supported for installing a ZFS root pool with a flash archive:

After a ZFS flash archive is installed, the system is configured as follows:


Example 5–2 Installing a System with a ZFS Flash Archive

After the master system is installed or upgraded to at least the Solaris 10 10/09 release, create a flash archive of the ZFS root pool. For example:


# flarcreate -n zfsBE zfs10upflar
Full Flash
Checking integrity...
Integrity OK.
Running precreation scripts...
Precreation scripts done.
Determining the size of the archive...
The archive will be approximately 4.94GB.
Creating the archive...
Archive creation complete.
Running postcreation scripts...
Postcreation scripts done.

Running pre-exit scripts...
Pre-exit scripts done.

On the system that will be used as the installation server, create a JumpStart profile as you would to install any system. For example, the following profile is used to install the zfs10upflar archive.


install_type flash_install
archive_location nfs system:/export/jump/zfs10upflar
partitioning explicit
pool rpool auto auto auto mirror c0t1d0s0 c0t0d0s0

Installing a ZFS Root File System (Oracle Solaris JumpStart Installation)

You can create a JumpStart profile to install a ZFS root file system or a UFS root file system.

A ZFS specific profile must contain the new pool keyword. The pool keyword installs a new root pool, and a new boot environment is created by default. You can provide the name of the boot environment as well as create a separate /var dataset with the bootenv installbe keywords and the bename and dataset options.
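
For example, a minimal ZFS specific profile that uses these keywords might look like the following sketch. The BE name s10zfsBE and the disk slice c0t0d0s0 are placeholders, and the dataset /var entry is optional:


install_type initial_install
pool rpool auto auto auto c0t0d0s0
bootenv installbe bename s10zfsBE dataset /var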

For general information about using JumpStart features, see Oracle Solaris 10 9/10 Installation Guide: Custom JumpStart and Advanced Installations.

If you will be configuring zones after the JumpStart installation of a ZFS root file system and you plan on patching or upgrading the system, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) or Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).

JumpStart Keywords for ZFS

The following keywords are permitted in a ZFS specific profile:

auto

Automatically specifies the size of the slices for the pool, swap volume, or dump volume. The size of the disk is checked to verify that the minimum size can be accommodated. If the minimum size can be accommodated, the largest possible pool size is allocated, given the constraints, such as the size of the disks, preserved slices, and so on.

For example, if you specify c0t0d0s0, the root pool slice is created as large as possible if you specify either the all or auto keywords. Or, you can specify a particular size for the slice, swap volume, or dump volume.

The auto keyword works similarly to the all keyword when used with a ZFS root pool because pools don't have unused disk space.

bootenv

Identifies the boot environment characteristics.

Use the following bootenv keyword syntax to create a bootable ZFS root environment:

bootenv installbe bename BE-name [dataset mount-point]

installbe

Creates a new BE that is identified by the bename option and BE-name entry and installs it.

bename BE-name

Identifies the BE-name to install.

If bename is not used with the pool keyword, then a default BE is created.

dataset mount-point

Use the optional dataset keyword to identify a /var dataset that is separate from the root dataset. The mount-point value is currently limited to /var. For example, a bootenv syntax line for a separate /var dataset would be similar to the following:


bootenv installbe bename zfsroot dataset /var
pool

Defines the new root pool to be created. The following keyword syntax must be provided:


pool poolname poolsize swapsize dumpsize vdevlist
poolname

Identifies the name of the pool to be created. The pool is created with the specified pool size and with the specified physical devices (vdevs). The poolname value must not be the name of an existing pool; otherwise, the existing pool is overwritten.

poolsize

Specifies the size of the pool to be created. The value can be auto or existing. The auto value allocates the largest possible pool size, given the constraints, such as size of the disks, preserved slices, and so on. The existing value means the boundaries of existing slices by that name are preserved and not overwritten. The size is assumed to be in MB, unless specified by g (GB).

swapsize

Specifies the size of the swap volume to be created. The auto value means that the default swap size is used. You can specify a size with a size value. The size is assumed to be in MB, unless specified by g (GB).

dumpsize

Specifies the size of the dump volume to be created. The auto value means that the default dump size is used. You can specify a size with a size value. The size is assumed to be in MB, unless specified by g (GB).

vdevlist

Specifies one or more devices that are used to create the pool. The format of vdevlist is the same as the format of the zpool create command. At this time, only mirrored configurations are supported when multiple devices are specified. Devices in vdevlist must be slices for the root pool. The any value means that the installation software selects a suitable device.

You can mirror as many disks as you like, but the size of the pool that is created is determined by the smallest of the specified disks. For more information about creating mirrored storage pools, see Mirrored Storage Pool Configuration.
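
For example, a hypothetical profile entry that mirrors three slices might look like the following sketch; the resulting pool is only as large as the smallest of the three slices:


pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0 c0t2d0s0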

JumpStart Profile Examples for ZFS

This section provides examples of ZFS specific JumpStart profiles.

The following profile performs an initial installation, specified with install_type initial_install, in a new pool, identified with pool newpool, whose size is automatically set with the auto keyword to the size of the specified disks. The swap area and dump device are also automatically sized with the auto keyword, and the pool uses a mirrored configuration of disks (with the mirror keyword and the disks specified as c0t0d0s0 and c0t1d0s0). Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a BE named s10-xx is created.


install_type initial_install
pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename s10-xx

The following profile performs an initial installation with the keyword install_type initial_install of the SUNWCall metacluster in a new pool called newpool, which is 80 GB in size. This pool is created with a 2-GB swap volume and a 2-GB dump volume, in a mirrored configuration of any two available devices that are large enough to create an 80-GB pool. If two such devices aren't available, the installation fails. Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a BE named s10-xx is created.


install_type initial_install
cluster SUNWCall
pool newpool 80g 2g 2g mirror any any
bootenv installbe bename s10-xx

JumpStart installation syntax enables you to preserve or create a UFS file system on a disk that also includes a ZFS root pool. This configuration is not recommended for production systems, but could be used for transition or migration needs on a small system, such as a laptop.

JumpStart Issues for ZFS

Consider the following issues before starting a JumpStart installation of a bootable ZFS root file system:

Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade)

Oracle Solaris Live Upgrade features related to UFS components are still available, and they work as in previous Solaris releases.

The following features are also available:

For detailed information about Oracle Solaris installation and Oracle Solaris Live Upgrade features, see the Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

The basic process for migrating a UFS root file system to a ZFS root file system follows:

For information about ZFS and Oracle Solaris Live Upgrade requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.

ZFS Migration Issues With Oracle Solaris Live Upgrade

Review the following issues before you use Oracle Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system:

Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones)

The following examples show how to migrate a UFS root file system to a ZFS root file system.

If you are migrating or updating a system with zones, see the following sections:


Example 5–3 Using Oracle Solaris Live Upgrade to Migrate a UFS Root File System to a ZFS Root File System

The following example shows how to create a BE of a ZFS root file system from a UFS root file system. The current BE, ufsBE, which contains a UFS root file system, is identified by the -c option. If you do not include the optional -c option, the current BE name defaults to the device name. The new BE, zfsBE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation.

The ZFS storage pool must be created with slices rather than with whole disks to be upgradeable and bootable. Before you create the new pool, ensure that the disks to be used in the pool have an SMI (VTOC) label instead of an EFI label. If the disk is relabeled with an SMI label, ensure that the labeling process did not change the partitioning scheme. In most cases, all of the disk's capacity should be in the slice that is intended for the root pool.


# zpool create rpool mirror c1t2d0s0 c2t1d0s0
# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t2d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-qD.mnt
updating /.alt.tmp.b-qD.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.

After the lucreate operation completes, use the lustatus command to view the BE status. For example:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         

Then, review the list of ZFS components. For example:


# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rpool                 7.17G  59.8G  95.5K  /rpool
rpool/ROOT            4.66G  59.8G    21K  /rpool/ROOT
rpool/ROOT/zfsBE      4.66G  59.8G  4.66G  /
rpool/dump               2G  61.8G    16K  -
rpool/swap             517M  60.3G    16K  -

Next, use the luactivate command to activate the new ZFS BE. For example:


# luactivate zfsBE
A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************
.
.
.
Modifying boot archive service
Activation of boot environment <zfsBE> successful.

Next, reboot the system to the ZFS BE.


# init 6

Confirm that the ZFS BE is active.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -         
zfsBE                      yes      yes    yes       no     -      

If you switch back to the UFS BE, you must re-import any ZFS storage pools that were created while the ZFS BE was booted because they are not automatically available in the UFS BE.
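
For example, assuming a pool named tank was created while the ZFS BE was booted, you could re-import it after booting back to the UFS BE. The pool name is only a placeholder:


# zpool import tank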

If the UFS BE is no longer required, you can remove it with the ludelete command.
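
For example, to remove the ufsBE boot environment from this example:


# ludelete ufsBE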



Example 5–4 Using Oracle Solaris Live Upgrade to Create a ZFS BE From a ZFS BE

Creating a ZFS BE from a ZFS BE in the same pool is very quick because this operation uses ZFS snapshot and clone features. If the current BE resides on the same ZFS pool, the -p option is omitted.

If you have multiple ZFS BEs, do the following to select which BE to boot from:

For more information, see Example 5–9.


# lucreate -n zfs2BE
Analyzing system configuration.
No name for current boot environment.
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.


Example 5–5 Upgrading Your ZFS BE (luupgrade)

You can upgrade your ZFS BE with additional packages or patches.

The basic process follows:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -   
# luupgrade -p -n zfsBE -s /net/system/export/s10up/Solaris_10/Product SUNWchxge
Validating the contents of the media </net/install/export/s10up/Solaris_10/Product>.
Mounting the BE <zfsBE>.
Adding packages to the BE <zfsBE>.

Processing package instance <SUNWchxge> from </net/install/export/s10up/Solaris_10/Product>

Chelsio N110 10GE NIC Driver(sparc) 11.10.0,REV=2006.02.15.20.41
Copyright (c) 2010, Oracle and/or its affiliates. All rights reserved.

This appears to be an attempt to install the same architecture and
version of a package which is already installed.  This installation
will attempt to overwrite this package.

Using </a> as the package base directory.
## Processing package information.
## Processing system information.
   4 package pathnames are already properly installed.
## Verifying package dependencies.
## Verifying disk space requirements.
## Checking for conflicts with packages already installed.
## Checking for setuid/setgid programs.

This package contains scripts which will be executed with super-user
permission during the process of installing this package.

Do you want to continue with the installation of <SUNWchxge> [y,n,?] y
Installing Chelsio N110 10GE NIC Driver as <SUNWchxge>

## Installing part 1 of 1.
## Executing postinstall script.

Installation of <SUNWchxge> was successful.
Unmounting the BE <zfsBE>.
The package add to the BE <zfsBE> completed.

Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08)

You can use Oracle Solaris Live Upgrade to migrate a system with zones, but the supported configurations are limited in the Solaris 10 10/08 release. If you are installing or upgrading to at least the Solaris 10 5/09 release, more zone configurations are supported. For more information, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).

This section describes how to configure and install a system with zones so that it can be upgraded and patched with Oracle Solaris Live Upgrade. If you are migrating to a ZFS root file system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).

If you are migrating a system with zones or if you are configuring a system with zones in the Solaris 10 10/08 release, review the following procedures:

Follow these recommended procedures to set up zones on a system with a ZFS root file system to ensure that you can use Oracle Solaris Live Upgrade on that system.

Procedure: How to Migrate a UFS Root File System With Zone Roots on UFS to a ZFS Root File System (Solaris 10 10/08)

This procedure explains how to migrate a UFS root file system with zones installed to a ZFS root file system and ZFS zone root configuration that can be upgraded or patched.

In the steps that follow, the example pool name is rpool, and the example boot environment names begin with s10BE.

  1. Upgrade the system to the Solaris 10 10/08 release if it is running a previous Solaris 10 release.

    For more information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

  2. Create the root pool.


    # zpool create rpool mirror c0t1d0 c1t1d0
    

    For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.

  3. Confirm that the zones from the UFS environment are booted.

  4. Create the new ZFS boot environment.


    # lucreate -n s10BE2 -p rpool
    

    This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.

  5. Activate the new ZFS boot environment.


    # luactivate s10BE2
    

    Now, the system is running a ZFS root file system, but the zone roots on UFS are still in the UFS root file system. The next steps are required to fully migrate the UFS zones to a supported ZFS configuration.

  6. Reboot the system.


    # init 6
    
  7. Migrate the zones to a ZFS BE.

    1. Boot the zones.

    2. Create another ZFS BE within the pool.


      # lucreate -n s10BE3
      
    3. Activate the new boot environment.


      # luactivate s10BE3
      
    4. Reboot the system.


      # init 6
      

      This step verifies that the ZFS BE and the zones are booted.

  8. Resolve any potential mount-point problems.

    Due to a bug in Oracle Solaris Live Upgrade, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.

    1. Review the zfs list output.

      Look for incorrect temporary mount points. For example:


      # zfs list -r -o name,mountpoint rpool/ROOT/s10u6
      
      NAME                               MOUNTPOINT
      rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
      rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
      rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

      The mount point for the root ZFS BE (rpool/ROOT/s10u6) should be /.

    2. Reset the mount points for the ZFS BE and its datasets.

      For example:


      # zfs inherit -r mountpoint rpool/ROOT/s10u6
      # zfs set mountpoint=/ rpool/ROOT/s10u6
      
    3. Reboot the system.

      When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.

Procedure: How to Configure a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)

This procedure explains how to set up a ZFS root file system and ZFS zone root configuration that can be upgraded or patched. In this configuration, the ZFS zone roots are created as ZFS datasets.

In the steps that follow, the example pool name is rpool, and the example name of the active boot environment is s10BE. The name for the zones dataset can be any legal dataset name. In the following example, the zones dataset name is zones.

  1. Install the system with a ZFS root, either by using the Solaris interactive text installer or the Solaris JumpStart installation method.

    For information about installing a ZFS root file system by using the initial installation method or the Solaris JumpStart method, see Installing a ZFS Root File System (Initial Installation) or Installing a ZFS Root File System (Oracle Solaris JumpStart Installation).

  2. Boot the system from the newly created root pool.

  3. Create a dataset for grouping the zone roots.

    For example:


    # zfs create -o canmount=noauto rpool/ROOT/s10BE/zones
    

    Setting the noauto value for the canmount property prevents the dataset from being mounted other than by the explicit action of Oracle Solaris Live Upgrade and system startup code.
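
    For example, you can verify the property setting as follows:


    # zfs get canmount rpool/ROOT/s10BE/zones
    NAME                    PROPERTY  VALUE   SOURCE
    rpool/ROOT/s10BE/zones  canmount  noauto  local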

  4. Mount the newly created zones dataset.


    # zfs mount rpool/ROOT/s10BE/zones
    

    The dataset is mounted at /zones.

  5. Create and mount a dataset for each zone root.


    # zfs create -o canmount=noauto rpool/ROOT/s10BE/zones/zonerootA
    # zfs mount rpool/ROOT/s10BE/zones/zonerootA
    
  6. Set the appropriate permissions on the zone root directory.


    # chmod 700 /zones/zonerootA
    
  7. Configure the zone, setting the zone path as follows:


    # zonecfg -z zoneA
        zoneA: No such zone configured
        Use 'create' to begin configuring a new zone.
        zonecfg:zoneA> create
        zonecfg:zoneA> set zonepath=/zones/zonerootA
    

    You can enable the zones to boot automatically when the system is booted by using the following syntax:


    zonecfg:zoneA> set autoboot=true
    
  8. Install the zone.


    # zoneadm -z zoneA install
    
  9. Boot the zone.


    # zoneadm -z zoneA boot
    

Procedure: How to Upgrade or Patch a ZFS Root File System With Zone Roots on ZFS (Solaris 10 10/08)

Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots on ZFS. These updates can either be a system upgrade or the application of patches.

In the steps that follow, newBE is the example name of the boot environment that is upgraded or patched.

  1. Create the boot environment to upgrade or patch.


    # lucreate -n newBE
    

    The existing boot environment, including all the zones, is cloned. A dataset is created for each dataset in the original boot environment. The new datasets are created in the same pool as the current root pool.

  2. Select one of the following to upgrade the system or apply patches to the new boot environment:

    • Upgrade the system.


      # luupgrade -u -n newBE -s /net/install/export/s10u7/latest
      

      where the -s option specifies the location of the Solaris installation medium.

    • Apply patches to the new boot environment.


       # luupgrade -t -n newBE -s /patchdir 139147-02 157347-14
      
  3. Activate the new boot environment.


    # luactivate newBE
    
  4. Boot from the newly activated boot environment.


    # init 6
    
  5. Resolve any potential mount-point problems.

    Due to a bug in the Oracle Solaris Live Upgrade feature, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point.

    1. Review the zfs list output.

      Look for incorrect temporary mount points. For example:


      # zfs list -r -o name,mountpoint rpool/ROOT/newBE
      
      NAME                               MOUNTPOINT
      rpool/ROOT/newBE                   /.alt.tmp.b-VP.mnt/
      rpool/ROOT/newBE/zones             /.alt.tmp.b-VP.mnt/zones
      rpool/ROOT/newBE/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

      The mount point for the root ZFS BE (rpool/ROOT/newBE) should be /.

    2. Reset the mount points for the ZFS BE and its datasets.

      For example:


      # zfs inherit -r mountpoint rpool/ROOT/newBE
      # zfs set mountpoint=/ rpool/ROOT/newBE
      
    3. Reboot the system.

      When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.

Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09)

You can use the Oracle Solaris Live Upgrade feature to migrate or upgrade a system with zones starting in the Solaris 10 10/08 release. Additional sparse-root and whole-root zone configurations are supported by Live Upgrade starting in the Solaris 10 5/09 release.

This section describes how to configure a system with zones so that it can be upgraded and patched with Oracle Solaris Live Upgrade starting in the Solaris 10 5/09 release. If you are migrating to a ZFS root file system without zones, see Using Oracle Solaris Live Upgrade to Migrate to a ZFS Root File System (Without Zones).

Consider the following points when using Oracle Solaris Live Upgrade with ZFS and zones in at least the Solaris 10 5/09 release:

If you are migrating or configuring a system with zones starting in the Solaris 10 5/09 release, review the following information:

Supported ZFS with Zone Root Configuration Information (at Least Solaris 10 5/09)

Review the supported zone configurations before using Oracle Solaris Live Upgrade to migrate or upgrade a system with zones.

Procedure: How to Create a ZFS BE With a ZFS Root File System and a Zone Root (at Least Solaris 10 5/09)

Use this procedure after you have performed an initial installation of at least the Solaris 10 5/09 release to create a ZFS root file system. Also use this procedure after you have used the luupgrade feature to upgrade a ZFS root file system to at least the Solaris 10 5/09 release. A ZFS BE that is created using this procedure can then be upgraded or patched.

In the steps that follow, the example Oracle Solaris 10 9/10 system has a ZFS root file system and a zone root dataset in /rpool/zones. A ZFS BE named zfs2BE is created and can then be upgraded or patched.

  1. Review the existing ZFS file systems.


    # zfs list
    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    rpool                 7.26G  59.7G    98K  /rpool
    rpool/ROOT            4.64G  59.7G    21K  legacy
    rpool/ROOT/zfsBE      4.64G  59.7G  4.64G  /
    rpool/dump            1.00G  59.7G  1.00G  -
    rpool/export            44K  59.7G    23K  /export
    rpool/export/home       21K  59.7G    21K  /export/home
    rpool/swap               1G  60.7G    16K  -
    rpool/zones            633M  59.7G   633M  /rpool/zones
  2. Ensure that the zones are installed and booted.


    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       2 zfszone          running    /rpool/zones                   native   shared
  3. Create the ZFS BE.


    # lucreate -n zfs2BE
    Analyzing system configuration.
    No name for current boot environment.
    INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
    Current boot environment is named <zfsBE>.
    Creating initial configuration for primary boot environment <zfsBE>.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Comparing source boot environment <zfsBE> file systems with the file 
    system(s) you specified for the new boot environment. Determining which 
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <zfs2BE>.
    Source boot environment is <zfsBE>.
    Creating boot environment <zfs2BE>.
    Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
    Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
    Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
    Population of boot environment <zfs2BE> successful.
    Creation of boot environment <zfs2BE> successful.
  4. Activate the ZFS BE.


    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    zfsBE                      yes      yes    yes       no     -         
    zfs2BE                     yes      no     no        yes    -         
    # luactivate zfs2BE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>.
    .
    .
    .
    # init 6
    
  5. Confirm that the ZFS file systems and zones are created in the new BE.


    # zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    rpool                             7.38G  59.6G    98K  /rpool
    rpool/ROOT                        4.72G  59.6G    21K  legacy
    rpool/ROOT/zfs2BE                 4.72G  59.6G  4.64G  /
    rpool/ROOT/zfs2BE@zfs2BE          74.0M      -  4.64G  -
    rpool/ROOT/zfsBE                  5.45M  59.6G  4.64G  /.alt.zfsBE
    rpool/dump                        1.00G  59.6G  1.00G  -
    rpool/export                        44K  59.6G    23K  /export
    rpool/export/home                   21K  59.6G    21K  /export/home
    rpool/swap                           1G  60.6G    16K  -
    rpool/zones                       17.2M  59.6G   633M  /rpool/zones
    rpool/zones-zfsBE                  653M  59.6G   633M  /rpool/zones-zfsBE
    rpool/zones-zfsBE@zfs2BE          19.9M      -   633M  -
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       - zfszone          installed  /rpool/zones                   native   shared

Procedure: How to Upgrade or Patch a ZFS Root File System With Zone Roots (at Least Solaris 10 5/09)

Use this procedure when you need to upgrade or patch a ZFS root file system with zone roots in at least the Solaris 10 5/09 release. These updates can either be a system upgrade or the application of patches.

In the steps that follow, zfs2BE is the example name of the boot environment that is upgraded or patched.

  1. Review the existing ZFS file systems.


    # zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    rpool                             7.38G  59.6G   100K  /rpool
    rpool/ROOT                        4.72G  59.6G    21K  legacy
    rpool/ROOT/zfs2BE                 4.72G  59.6G  4.64G  /
    rpool/ROOT/zfs2BE@zfs2BE          75.0M      -  4.64G  -
    rpool/ROOT/zfsBE                  5.46M  59.6G  4.64G  /
    rpool/dump                        1.00G  59.6G  1.00G  -
    rpool/export                        44K  59.6G    23K  /export
    rpool/export/home                   21K  59.6G    21K  /export/home
    rpool/swap                           1G  60.6G    16K  -
    rpool/zones                       22.9M  59.6G   637M  /rpool/zones
    rpool/zones-zfsBE                  653M  59.6G   633M  /rpool/zones-zfsBE
    rpool/zones-zfsBE@zfs2BE          20.0M      -   633M  -
  2. Ensure that the zones are installed and booted.


    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       5 zfszone          running    /rpool/zones                   native   shared
  3. Create the ZFS BE to upgrade or patch.


    # lucreate -n zfs2BE
    Analyzing system configuration.
    Comparing source boot environment <zfsBE> file systems with the file 
    system(s) you specified for the new boot environment. Determining which 
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <zfs2BE>.
    Source boot environment is <zfsBE>.
    Creating boot environment <zfs2BE>.
    Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
    Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
    Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
    Creating snapshot for <rpool/zones> on <rpool/zones@zfs10092BE>.
    Creating clone for <rpool/zones@zfs2BE> on <rpool/zones-zfs2BE>.
    Population of boot environment <zfs2BE> successful.
    Creation of boot environment <zfs2BE> successful.
  4. Select one of the following to upgrade the system or apply patches to the new boot environment:

    • Upgrade the system.


      # luupgrade -u -n zfs2BE -s /net/install/export/s10up/latest
      

      where the -s option specifies the location of the Solaris installation medium.

      This process can take a very long time.

      For a complete example of the luupgrade process, see Example 5–6.

    • Apply patches to the new boot environment.


      # luupgrade -t -n zfs2BE -s /patchdir patch-id-02 patch-id-04
      
  5. Activate the new boot environment.


    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    zfsBE                      yes      yes    yes       no     -         
    zfs2BE                     yes      no     no        yes    -    
    # luactivate zfs2BE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfs2BE>.
    .
    .
    .
  6. Boot from the newly activated boot environment.


    # init 6
    

Example 5–6 Upgrading a ZFS Root File System With a Zone Root to an Oracle Solaris 10 9/10 ZFS Root File System

In this example, a ZFS BE (zfsBE), which was created on a Solaris 10 10/09 system with a ZFS root file system and zone root in a non-root pool, is upgraded to the Oracle Solaris 10 9/10 release. This process can take a long time. Then, the upgraded BE (zfs2BE) is activated. Ensure that the zones are installed and booted before attempting the upgrade.

In this example, the zonepool pool, the /zonepool/zones dataset, and the zfszone zone are created as follows:


# zpool create zonepool mirror c2t1d0 c2t5d0
# zfs create zonepool/zones
# chmod 700 /zonepool/zones
# zonecfg -z zfszone
zfszone: No such zone configured
Use 'create' to begin configuring a new zone.
zonecfg:zfszone> create
zonecfg:zfszone> set zonepath=/zonepool/zones
zonecfg:zfszone> verify
zonecfg:zfszone> exit
# zoneadm -z zfszone install
cannot create ZFS dataset zonepool/zones: dataset already exists
Preparing to install zone <zfszone>.
Creating list of files to copy from the global zone.
Copying <8960> files to the zone.
.
.
.

# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   2 zfszone          running    /zonepool/zones                native   shared

# lucreate -n zfs2BE
.
.
.
# luupgrade -u -n zfs2BE -s /net/install/export/s10up/latest
40410 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/install/export/s10up/latest/Solaris_10/Tools/Boot>
Validating the contents of the media </net/install/export/s10up/latest>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <10>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfs2BE>.
Determining packages to install or upgrade for BE <zfs2BE>.
Performing the operating system upgrade of the BE <zfs2BE>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Updating package information on boot environment <zfs2BE>.
Package information successfully updated on boot environment <zfs2BE>.
Adding operating system patches to the BE <zfs2BE>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot 
environment <zfs2BE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot 
environment <zfs2BE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files 
are located on boot environment <zfs2BE>. Before you activate boot 
environment <zfs2BE>, determine if any additional system maintenance is 
required or if additional media of the software distribution must be 
installed.
The Solaris upgrade of the boot environment <zfs2BE> is complete.
Installing failsafe
Failsafe install is complete.
# luactivate zfs2BE
# init 6
# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -         
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - zfszone          installed  /zonepool/zones                native   shared

ProcedureHow to Migrate a UFS Root File System With a Zone Root to a ZFS Root File System (at Least Solaris 10 5/09)

Use this procedure to migrate a system with a UFS root file system and a zone root to at least the Solaris 10 5/09 release. Then, use Oracle Solaris Live Upgrade to create a ZFS BE.

In the steps that follow, the example UFS BE name is c1t1d0s0, the zone root is /zonepool/zones, and the ZFS root BE is zfsBE.

  1. Upgrade the system to at least the Solaris 10 5/09 release if it is running a previous Solaris 10 release.

    For information about upgrading a system that is running the Solaris 10 release, see Oracle Solaris 10 9/10 Installation Guide: Solaris Live Upgrade and Upgrade Planning.

  2. Create the root pool.

    For information about the root pool requirements, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.

  3. Confirm that the zones from the UFS environment are booted.


    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       2 zfszone          running    /zonepool/zones                native   shared
  4. Create the new ZFS boot environment.


    # lucreate -c c1t1d0s0 -n zfsBE -p rpool
    

    This command establishes datasets in the root pool for the new boot environment and copies the current boot environment (including the zones) to those datasets.

  5. Activate the new ZFS boot environment.


    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    c1t1d0s0                   yes      no     no        yes    -         
    zfsBE                      yes      yes    yes       no     -         
    # luactivate zfsBE
    A Live Upgrade Sync operation will be performed on startup of boot environment <zfsBE>.
    .
    .
    .
  6. Reboot the system.


    # init 6
    
  7. Confirm that the ZFS file systems and zones are created in the new BE.


    # zfs list
    NAME                                USED  AVAIL  REFER  MOUNTPOINT
    rpool                              6.17G  60.8G    98K  /rpool
    rpool/ROOT                         4.67G  60.8G    21K  /rpool/ROOT
    rpool/ROOT/zfsBE                   4.67G  60.8G  4.67G  /
    rpool/dump                         1.00G  60.8G  1.00G  -
    rpool/swap                          517M  61.3G    16K  -
    zonepool                            634M  7.62G    24K  /zonepool
    zonepool/zones                      270K  7.62G   633M  /zonepool/zones
    zonepool/zones-c1t1d0s0             634M  7.62G   633M  /zonepool/zones-c1t1d0s0
    zonepool/zones-c1t1d0s0@zfsBE       262K      -   633M  -
    # zoneadm list -cv
      ID NAME             STATUS     PATH                           BRAND    IP    
       0 global           running    /                              native   shared
       - zfszone          installed  /zonepool/zones                native   shared

Example 5–7 Migrating a UFS Root File System With a Zone Root to a ZFS Root File System

In this example, an Oracle Solaris 10 9/10 system with a UFS root file system and a zone root (/uzone/ufszone), as well as a ZFS non-root pool (pool) and a zone root (/pool/zones/zfszone), is migrated to a ZFS root file system. Ensure that the ZFS root pool is created and that the zones are installed and booted before attempting the migration.


# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   2 ufszone          running    /uzone/ufszone                 native   shared
   3 zfszone          running    /pool/zones/zfszone            native   shared

# lucreate -c ufsBE -n zfsBE -p rpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <ufsBE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Copying root of zone <ufszone> to </.alt.tmp.b-EYd.mnt/uzone/ufszone>.
Creating snapshot for <pool/zones/zfszone> on <pool/zones/zfszone@zfsBE>.
Creating clone for <pool/zones/zfszone@zfsBE> on <pool/zones/zfszone-zfsBE>.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </rpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-DLd.mnt
updating /.alt.tmp.b-DLd.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -         
zfsBE                      yes      no     no        yes    -         
# luactivate zfsBE    
.
.
.
# init 6
.
.
.
# zfs list
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
pool                                    628M  66.3G    19K  /pool
pool/zones                              628M  66.3G    20K  /pool/zones
pool/zones/zfszone                     75.5K  66.3G   627M  /pool/zones/zfszone
pool/zones/zfszone-ufsBE                628M  66.3G   627M  /pool/zones/zfszone-ufsBE
pool/zones/zfszone-ufsBE@zfsBE           98K      -   627M  -
rpool                                  7.76G  59.2G    95K  /rpool
rpool/ROOT                             5.25G  59.2G    18K  /rpool/ROOT
rpool/ROOT/zfsBE                       5.25G  59.2G  5.25G  /
rpool/dump                             2.00G  59.2G  2.00G  -
rpool/swap                              517M  59.7G    16K  -
# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              native   shared
   - ufszone          installed  /uzone/ufszone                 native   shared
   - zfszone          installed  /pool/zones/zfszone            native   shared

ZFS Support for Swap and Dump Devices

During an initial Solaris OS installation or after performing an Oracle Solaris Live Upgrade migration from a UFS file system, a swap area is created on a ZFS volume in the ZFS root pool. For example:


# swap -l
swapfile                  dev  swaplo  blocks   free
/dev/zvol/dsk/rpool/swap 256,1      16 4194288 4194288

During an initial Solaris OS installation or an Oracle Solaris Live Upgrade migration from a UFS file system, a dump device is created on a ZFS volume in the ZFS root pool. In general, a dump device requires no administration because it is set up automatically at installation time. For example:


# dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/rpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes
   Save compressed: on

If you disable and remove the dump device, then you will need to enable it with the dumpadm command after it is recreated. In most cases, you will only have to adjust the size of the dump device by using the zfs command.
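
For example, if the dump device has been removed, the following is a minimal sketch of recreating it and re-enabling it with the dumpadm command; the 2-Gbyte volume size is illustrative:


# zfs create -V 2G rpool/dump
# dumpadm -d /dev/zvol/dsk/rpool/dump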

For information about the swap and dump volume sizes that are created by the installation programs, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.

Both the swap volume size and the dump volume size can be adjusted during and after installation. For more information, see Adjusting the Sizes of Your ZFS Swap Device and Dump Device.

Consider the following issues when working with your ZFS swap and dump devices:

See the following sections for more information:

Adjusting the Sizes of Your ZFS Swap Device and Dump Device

Because of the differences in the way a ZFS root installation determines the size of swap and dump devices, you might need to adjust their size before, during, or after installation.
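
For example, the following sketch increases the size of the swap volume to 2 Gbytes after installation. It assumes that the swap device can be temporarily removed from use, and the size is illustrative:


# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=2G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap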

Troubleshooting ZFS Dump Device Issues

Review the following items if you have problems either capturing a system crash dump or resizing the dump device.

Booting From a ZFS Root File System

Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When a system is booted from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.

When a system is booted for installation, a RAM disk is used for the root file system during the entire installation process.

Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, the boot device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.

By default, the dataset selected for booting is identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset in the boot -Z command.
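
For example, the following sketch displays the current bootfs value and then designates a different bootable dataset from this chapter's examples as the default:


# zpool get bootfs rpool
# zpool set bootfs=rpool/ROOT/zfs2BE rpool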

Booting From an Alternate Disk in a Mirrored ZFS Root Pool

You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to create a mirrored ZFS root pool after installation. For more information, see:

Review the following known issues regarding mirrored ZFS root pools:

SPARC: Booting From a ZFS Root File System

On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.

During the Solaris OS installation and Oracle Solaris Live Upgrade process, the ZFS root file system is automatically designated with the bootfs property.

Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.

When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:


title zfsBE
bootfs rpool/ROOT/zfsBE
title zfs2BE
bootfs rpool/ROOT/zfs2BE

When a new BE is created, the menu.lst file is updated automatically.

On a SPARC based system, two new boot options are available:


Example 5–8 SPARC: Booting From a Specific ZFS Boot Environment

If you have multiple ZFS BEs in a ZFS storage pool on your system's boot device, you can use the luactivate command to specify a default BE.

For example, the following ZFS BEs are available as described by the lustatus output:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      no     no        yes    -         
zfs2BE                     yes      yes    yes       no     -

If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command to boot from a BE that is different from the default BE. However, a BE that is booted from a boot -L session is not reset as the default BE nor is the bootfs property updated. If you want to make the BE booted from a boot -L session the default BE, then you must activate it with the luactivate command.

For example:


ok boot -L
Rebooting with command: boot -L
Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L

1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]: 1
To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsBE

Program terminated
ok boot -Z rpool/ROOT/zfsBE


Example 5–9 SPARC: Booting a ZFS File System in Failsafe Mode

On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe as follows:


ok boot -F failsafe

To boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:


ok boot -Z rpool/ROOT/zfsBE -F failsafe

x86: Booting From a ZFS Root File System

The following entries are added to the /pool-name/boot/grub/menu.lst file during the Solaris OS installation process or Oracle Solaris Live Upgrade operation to boot ZFS automatically:


title Solaris 10 9/10  X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.

On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system corresponding to this menu entry is a ZFS dataset, the following option is added:


-B $ZFS-BOOTFS

Example 5–10 x86: Booting a ZFS File System

When a system boots from a ZFS file system, the root device is specified by the boot -B $ZFS-BOOTFS parameter on either the kernel or module line in the GRUB menu entry. This parameter value, similar to all parameters specified by the -B option, is passed by GRUB to the kernel. For example:



title Solaris 10 9/10  X86
findroot (rootfs0,0,a)
kernel$ /platform/i86pc/multiboot -B $ZFS-BOOTFS
module /platform/i86pc/boot_archive
title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

Example 5–11 x86: Booting a ZFS File System in Failsafe Mode

The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:


title Solaris failsafe
findroot (rootfs0,0,a)
kernel /boot/multiboot kernel/unix -s -B console=ttya
module /boot/x86.miniroot-safe

Resolving ZFS Mount-Point Problems That Prevent Successful Booting (Solaris 10 10/08)

The best way to change the active boot environment is to use the luactivate command. If booting the active environment fails due to a bad patch or a configuration error, the only way to boot from a different environment is to select that environment at boot time. You can select an alternate BE from the GRUB menu on an x86 based system or by booting it explicitly from the PROM on a SPARC based system.
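
For example, on a SPARC based system, you can boot an alternate BE explicitly from the PROM; the dataset name shown is illustrative:


ok boot -Z rpool/ROOT/zfs2BE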

Due to a bug in Oracle Solaris Live Upgrade in the Solaris 10 10/08 release, the inactive boot environment might fail to boot because a ZFS dataset or a zone's ZFS dataset in the boot environment has an invalid mount point. The same bug also prevents the BE from mounting if it has a separate /var dataset.

If a zone dataset has an invalid mount point, the mount point can be corrected by performing the following steps.

ProcedureHow to Resolve ZFS Mount-Point Problems

  1. Boot the system from a failsafe archive.

  2. Import the pool.

    For example:


    # zpool import rpool
    
  3. Look for incorrect temporary mount points.

    For example:


    # zfs list -r -o name,mountpoint rpool/ROOT/s10u6
    NAME                               MOUNTPOINT
    rpool/ROOT/s10u6                   /.alt.tmp.b-VP.mnt/
    rpool/ROOT/s10u6/zones             /.alt.tmp.b-VP.mnt//zones
    rpool/ROOT/s10u6/zones/zonerootA   /.alt.tmp.b-VP.mnt/zones/zonerootA

    The mount point for the root BE (rpool/ROOT/s10u6) should be /.

    If the boot is failing because of /var mounting problems, look for a similar incorrect temporary mount point for the /var dataset.
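
    For example, if the BE has a separate /var dataset, you could check it with a command similar to the following (the dataset name is illustrative):


    # zfs list -r -o name,mountpoint rpool/ROOT/s10u6/var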

  4. Reset the mount points for the ZFS BE and its datasets.

    For example:


    # zfs inherit -r mountpoint rpool/ROOT/s10u6
    # zfs set mountpoint=/ rpool/ROOT/s10u6
    
  5. Reboot the system.

    When the option to boot a specific boot environment is presented, either in the GRUB menu or at the OpenBoot PROM prompt, select the boot environment whose mount points were just corrected.

Booting For Recovery Purposes in a ZFS Root Environment

Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.

You will need to boot failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot failsafe mode to recover a lost or unknown root password.

If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.

ProcedureHow to Boot ZFS Failsafe Mode

  1. Boot failsafe mode.

    On a SPARC system:


    ok boot -F failsafe
    

    On an x86 system, select failsafe mode from the GRUB menu.

  2. Mount the ZFS BE on /a when prompted:


    .
    .
    .
    ROOT/zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    Starting shell.
  3. Change to the /a/etc directory.


    # cd /a/etc
    
  4. If necessary, set the TERM type.


    # TERM=vt100
    # export TERM
  5. Correct the passwd or shadow file.


    # vi shadow
    
  6. Reboot the system.


    # init 6
    

ProcedureHow to Boot ZFS From Alternate Media

If a problem prevents the system from booting successfully or some other severe problem occurs, you will need to boot from a network install server or from a Solaris installation CD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.

  1. Boot from an installation CD or from the network.

    • SPARC:


      ok boot cdrom -s 
      ok boot net -s
      

      If you don't use the -s option, you will need to exit the installation program.

    • x86: Select the network boot or boot from local CD option.

  2. Import the root pool and specify an alternate mount point. For example:


    # zpool import -R /a rpool
    
  3. Mount the ZFS BE. For example:


    # zfs mount rpool/ROOT/zfsBE
    
  4. Access the ZFS BE contents from the /a directory.


    # cd /a
    
  5. Reboot the system.


    # init 6
    

Recovering the ZFS Root Pool or Root Pool Snapshots

The following sections describe how to perform the following tasks:

ProcedureHow to Replace a Disk in the ZFS Root Pool

You might need to replace a disk in the root pool for the following reasons:

In a mirrored root pool configuration, you can attempt a disk replacement without booting from alternate media. You can replace a failed disk by using the zpool replace command. Or, if you have an additional disk, you can use the zpool attach command. See the procedure in this section for an example of attaching an additional disk and detaching a root pool disk.
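
For example, the following is a sketch of replacing a failed root pool disk with a disk in a different slot; the device names are illustrative:


# zpool replace rpool c1t0d0s0 c1t5d0s0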

Some hardware requires that you take a disk offline and unconfigure it before attempting the zpool replace operation to replace a failed disk. For example:


# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.

You must identify the boot device path names of the current disk and the new disk so that you can test booting from the replacement disk, and also manually boot from the existing disk if the replacement disk fails. In the example in the following procedure, the path name for the current root pool disk (c1t10d0s0) is:


/pci@8,700000/pci@3/scsi@5/sd@a,0

The path name for the replacement boot disk (c1t9d0s0) is:


/pci@8,700000/pci@3/scsi@5/sd@9,0
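
One way to identify these path names is to list the corresponding /dev/dsk entries, which are symbolic links to the physical device paths. For example, using the disks in this procedure:


# ls -l /dev/dsk/c1t10d0s0 /dev/dsk/c1t9d0s0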
  1. Physically connect the replacement (or new) disk.

  2. Confirm that the new disk has an SMI label and a slice 0.

    For information about relabeling a disk that is intended for the root pool, see the following site:

    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

  3. Attach the new disk to the root pool.

    For example:


    # zpool attach rpool c1t10d0s0 c1t9d0s0
    
  4. Confirm the root pool status.

    For example:


    # zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress, 25.47% done, 0h4m to go
    config:
    
            NAME           STATE     READ WRITE CKSUM
            rpool          ONLINE       0     0     0
              mirror-0     ONLINE       0     0     0
                c1t10d0s0  ONLINE       0     0     0
                c1t9d0s0   ONLINE       0     0     0
    
    errors: No known data errors
  5. After the resilvering is completed, apply the boot blocks to the new disk.

    Use syntax similar to the following:

    • SPARC:


      # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t9d0s0
      
    • x86:


      # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0
      
  6. Verify that you can boot from the new disk.

    For example, on a SPARC based system, you would use syntax similar to the following:


    ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0
    
  7. If the system boots from the new disk, detach the old disk.

    For example:


    # zpool detach rpool c1t10d0s0
    
  8. Set up the system to boot automatically from the new disk, either by using the eeprom command, by using the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
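
    For example, on a SPARC based system, the following sketch sets the default boot device with the eeprom command, using the replacement disk's device path from this procedure:


    # eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0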

ProcedureHow to Create Root Pool Snapshots

You can create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to perform a recursive snapshot of the root pool.

The following procedure creates a recursive root pool snapshot and stores the snapshot as a file in a pool on a remote system. If a root pool fails, the remote dataset can be mounted by using NFS and the snapshot file can be received into the recreated pool. You can instead store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Solaris OS miniroot.
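
For example, the following is a minimal sketch of the alternative approach, in which the snapshots are received directly into a dataset on the remote system over ssh. The remotepool/rpool-backup dataset name is illustrative, that dataset is assumed to already exist on the remote system, and root access over ssh is assumed:


local# zfs send -Rv rpool@0804 | ssh remote-system zfs receive -Fdu remotepool/rpool-backup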

For information about remotely storing and recovering root pool snapshots, and for the most up-to-date information about root pool recovery, go to the following site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Validating your remotely stored snapshots, whether they are stored as files or as actual snapshots, is an important step in root pool recovery. With either method, snapshots should be recreated on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.

In the following procedure, the system is booted from the zfsBE boot environment.

  1. Create a pool and file system on a remote system to store the snapshots.

    For example:


    remote# zfs create rpool/snaps
    
  2. Share the file system with the local system.

    For example:


    remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
    # share
    -@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   "" 
  3. Create a recursive snapshot of the root pool.


    local# zfs snapshot -r rpool@0804
    local# zfs list
    NAME                        USED  AVAIL  REFER  MOUNTPOINT
    rpool                      6.17G  60.8G    98K  /rpool
    rpool@0804                     0      -    98K  -
    rpool/ROOT                 4.67G  60.8G    21K  /rpool/ROOT
    rpool/ROOT@0804                0      -    21K  -
    rpool/ROOT/zfsBE           4.67G  60.8G  4.67G  /
    rpool/ROOT/zfsBE@0804       386K      -  4.67G  -
    rpool/dump                 1.00G  60.8G  1.00G  -
    rpool/dump@0804                0      -  1.00G  -
    rpool/swap                  517M  61.3G    16K  -
    rpool/swap@0804                0      -    16K  -
  4. Send the root pool snapshots to the remote system.

    For example:


    local# zfs send -Rv rpool@0804 > /net/remote-system/rpool/snaps/rpool.0804
    sending from @ to rpool@0804
    sending from @ to rpool/swap@0804
    sending from @ to rpool/ROOT@0804
    sending from @ to rpool/ROOT/zfsBE@0804
    sending from @ to rpool/dump@0804

ProcedureHow to Recreate a ZFS Root Pool and Restore Root Pool Snapshots

In this procedure, assume the following conditions:

All the steps are performed on the local system.

  1. Boot from a CD/DVD or the network.

    • SPARC: Select one of the following boot methods:


      ok boot net -s
      ok boot cdrom -s
      

      If you don't use the -s option, you will need to exit the installation program.

    • x86: Select the option for booting from the DVD or the network. Then, exit the installation program.

  2. Mount the remote snapshot dataset.

    For example:


    # mount -F nfs remote-system:/rpool/snaps /mnt
    

    If your network services are not configured, you might need to specify the remote-system's IP address.
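
    For example, a sketch that uses a hypothetical IP address for the remote system:


    # mount -F nfs 192.168.1.10:/rpool/snaps /mnt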

  3. If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you must relabel the disk.

    For more information about relabeling the disk, go to the following site:

    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

  4. Recreate the root pool.

    For example:


    # zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0
    
  5. Restore the root pool snapshots.

    This step might take some time. For example:


    # cat /mnt/rpool.0804 | zfs receive -Fdu rpool
    

    Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.

  6. Verify that the root pool datasets are restored.

    For example:


    # zfs list
    NAME                        USED  AVAIL  REFER  MOUNTPOINT
    rpool                      6.17G  60.8G    98K  /a/rpool
    rpool@0804                     0      -    98K  -
    rpool/ROOT                 4.67G  60.8G    21K  /legacy
    rpool/ROOT@0804                0      -    21K  -
    rpool/ROOT/zfsBE           4.67G  60.8G  4.67G  /a
    rpool/ROOT/zfsBE@0804       398K      -  4.67G  -
    rpool/dump                 1.00G  60.8G  1.00G  -
    rpool/dump@0804                0      -  1.00G  -
    rpool/swap                  517M  61.3G    16K  -
    rpool/swap@0804                0      -    16K  -
  7. Set the bootfs property on the root pool BE.

    For example:


    # zpool set bootfs=rpool/ROOT/zfsBE rpool
    
  8. Install the boot blocks on the new disk.

    SPARC:


    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    

    x86:


    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
    
  9. Reboot the system.


    # init 6
    

ProcedureHow to Roll Back Root Pool Snapshots From a Failsafe Boot

This procedure assumes that existing root pool snapshots are available. In the example, they are available on the local system.


# zfs snapshot -r rpool@0804
# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      6.17G  60.8G    98K  /rpool
rpool@0804                     0      -    98K  -
rpool/ROOT                 4.67G  60.8G    21K  /rpool/ROOT
rpool/ROOT@0804                0      -    21K  -
rpool/ROOT/zfsBE           4.67G  60.8G  4.67G  /
rpool/ROOT/zfsBE@0804       398K      -  4.67G  -
rpool/dump                 1.00G  60.8G  1.00G  -
rpool/dump@0804                0      -  1.00G  -
rpool/swap                  517M  61.3G    16K  -
rpool/swap@0804                0      -    16K  -
  1. Shut down the system and boot failsafe mode.


    ok boot -F failsafe
    ROOT/zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    
    Starting shell.
  2. Roll back each root pool snapshot.


    # zfs rollback rpool@0804
    # zfs rollback rpool/ROOT@0804
    # zfs rollback rpool/ROOT/zfsBE@0804
    
  3. Reboot to multiuser mode.


    # init 6