Solaris ZFS Administration Guide

Chapter 5 Installing and Booting a ZFS Root File System

This chapter describes how to install and boot a ZFS file system. Migrating a UFS root file system to a ZFS file system by using Solaris Live Upgrade is also covered.

The following sections are provided in this chapter:

For up-to-date troubleshooting information, go to the following site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Installing and Booting a ZFS Root File System (Overview)

Starting in the SXCE, build 90 release, you can install and boot from a ZFS root file system in the following ways:

After a SPARC based or an x86 based system is installed with a ZFS root file system or migrated to a ZFS root file system, the system boots automatically from the ZFS root file system. For more information about boot changes, see Booting From a ZFS Root File System.

ZFS Installation Features

The following ZFS installation features are provided in this Solaris release:

The following installation features are not provided in this release:

Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support

Make sure the following requirements are met before attempting to install a system with a ZFS root file system or attempting to migrate a UFS root file system to a ZFS root file system.

Solaris Release Requirements

You can install and boot a ZFS root file system or migrate to a ZFS root file system in the following ways:

General ZFS Storage Pool Requirements

Review the following sections that describe ZFS root pool space and configuration requirements.

ZFS Storage Pool Space Requirements

The required minimum amount of available pool space for a ZFS root file system is larger than for a UFS root file system because swap and dump devices must be separate devices in a ZFS root environment. By default, swap and dump devices are the same device in a UFS root file system.

When a system is installed or upgraded with a ZFS root file system, the size of the swap area and the dump device are dependent upon the amount of physical memory. The minimum amount of available pool space for a bootable ZFS root file system depends upon the amount of physical memory, the disk space available, and the number of boot environments (BEs) to be created.

Review the following ZFS storage pool space requirements:

ZFS Storage Pool Configuration Requirements

Review the following ZFS storage pool configuration requirements:

Installing a ZFS Root File System (Initial Installation)

In this Solaris release, you can perform an initial installation by using the Solaris interactive text installer to create a ZFS storage pool that contains a bootable ZFS root file system. If you have an existing ZFS storage pool that you want to use for your ZFS root file system, then you must use Solaris Live Upgrade to migrate your existing UFS root file system to a ZFS root file system in an existing ZFS storage pool. For more information, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).

If you already have ZFS storage pools on the system, they are acknowledged by the following message, but remain untouched, unless you select the disks in the existing pools to create the new storage pool.


There are existing ZFS pools available on this system.  However, they can only be upgraded 
using the Live Upgrade tools.  The following screens will only allow you to install a ZFS root system, 
not upgrade one.

Caution –

Existing pools will be destroyed if any of their disks are selected for the new pool.


Before you begin the initial installation to create a ZFS storage pool, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.


Example 5–1 Initial Installation of a Bootable ZFS Root File System

The Solaris interactive text installation process is basically the same as in previous Solaris releases, except that you are prompted to create a UFS or ZFS root file system. UFS is still the default file system in this release. If you select a ZFS root file system, you will be prompted to create a ZFS storage pool. Installing a ZFS root file system involves the following steps:

  1. Select the Solaris interactive installation method because a Solaris Flash installation is not available to create a bootable ZFS root file system.

    You can perform a standard upgrade to upgrade an existing bootable ZFS file system that is running the SXCE, build 90 release, but you cannot use this option to create a new bootable ZFS file system. You can also migrate a UFS root file system to a ZFS root file system as long as the SXCE, build 90 release is already installed. For more information about migrating to a ZFS root file system, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).

  2. If you want to create a ZFS root file system, select the ZFS option. For example:


    Choose Filesystem Type
    
      Select the filesystem to use for your Solaris installation
    
    
                [ ] UFS
                [X] ZFS
  3. After you select the software to be installed, you are prompted to select the disks to create your ZFS storage pool. This screen is similar to the corresponding screen in previous Solaris releases:


    Select Disks
    
      On this screen you must select the disks for installing Solaris software.
      Start by looking at the Suggested Minimum field; this value is the
      approximate space needed to install the software you've selected. For ZFS,
      multiple disks will be configured as mirrors, so the disk you choose, or the
      slice within the disk must exceed the Suggested Minimum value.
      NOTE: ** denotes current boot disk
    
      Disk Device                                              Available Space
    =============================================================================
      [X] ** c1t1d0                                           69994 MB
      [ ]    c1t2d0                                           69994 MB  (F4 to edit)
    
                                      Maximum Root Size:  69994 MB
                                      Suggested Minimum:   7466 MB

    You can select the disk or disks to be used for your ZFS root pool. If you select two disks, a mirrored two-disk configuration is set up for your root pool. Either a two-disk or three-disk mirrored pool is optimal. If you have eight disks and you select all eight disks, those eight disks are used for the root pool as one big mirror. This configuration is not optimal. Another option is to create a mirrored root pool after the initial installation is complete. A RAID-Z pool configuration for the root pool is not supported. For more information about configuring ZFS storage pools, see Replication Features of a ZFS Storage Pool.

  4. If you want to select two disks to create a mirrored root pool, then use the cursor control keys to select the second disk. For example, both c1t1d0 and c1t2d0 are selected for the root pool disks. Both disks must have an SMI label and a slice 0. If the disks are not labeled with an SMI label or do not contain slices, then you must exit the installation program, use the format utility to relabel and repartition the disks, and then restart the installation program.


    Select Disks
    
      On this screen you must select the disks for installing Solaris software.
      Start by looking at the Suggested Minimum field; this value is the
      approximate space needed to install the software you've selected. For ZFS,
      multiple disks will be configured as mirrors, so the disk you choose, or the
      slice within the disk must exceed the Suggested Minimum value.
      NOTE: ** denotes current boot disk
    
     Disk Device                                              Available Space
    =============================================================================
      [X] ** c1t1d0                                           69994 MB
      [X]    c1t2d0                                           69994 MB  (F4 to edit)
    
                                      Maximum Root Size:  69994 MB
                                      Suggested Minimum:   7466 MB

    If the Available Space column identifies 0 MB, this generally indicates that the disk has an EFI label.

  5. After you have selected a disk or disks for your ZFS storage pool, a screen that looks similar to the following is displayed:


    Configure ZFS Settings
    
      Specify the name of the pool to be created from the disk(s) you have chosen.
      Also specify the name of the dataset to be created within the pool that is
      to be used as the root directory for the filesystem.
    
    
                  ZFS Pool Name: rpool                                   
          ZFS Root Dataset Name: snv_109
          ZFS Pool Size (in MB): 69995
      Size of Swap Area (in MB): 2048
      Size of Dump Area (in MB): 1024
            (Pool size must be between 10076 MB and 69995 MB)
    
                             [X] Keep / and /var combined
                             [ ] Put /var on a separate dataset

    From this screen, you can change the name of the ZFS pool, dataset name, pool size, and swap and dump device sizes by moving the cursor control keys through the entries and replacing the default text value with new text. Or, you can accept the default values. In addition, you can modify the way the /var file system is created and mounted.

    In this example, the root dataset name is changed to zfsnv109BE.


     ZFS Pool Name: rpool                                   
          ZFS Root Dataset Name: zfsnv109BE
          ZFS Pool Size (in MB): 34731
             (Pool size must be between 6413 MB and 34731 MB)
  6. You can change the installation profile at this final installation screen. For example:


    Profile
    
      The information shown below is your profile for installing Solaris software.
      It reflects the choices you've made on previous screens.
    
      ============================================================================
    
                    Installation Option: Initial
                            Boot Device: c1t1d0
                  Root File System Type: ZFS
                        Client Services: None
    
                                Regions: North America
                          System Locale: C ( C )
    
                               Software: Solaris 11, Entire Distribution
                              Pool Name: rpool
                  Boot Environment Name: zfsnv109BE
                              Pool Size: 69995 MB
                        Devices in Pool: c1t1d0

After the installation is complete, review the resulting ZFS storage pool and file system information. For example:


# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
         c1t1d0s0   ONLINE       0     0     0

errors: No known data errors
# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  10.4G  56.5G    64K  /rpool
rpool/ROOT             7.43G  56.5G    18K  legacy
rpool/ROOT/zfsnv109BE  7.43G  56.5G  7.43G  /
rpool/dump             1.00G  56.5G  1.00G  -
rpool/export             41K  56.5G    21K  /export
rpool/export/home        20K  56.5G    20K  /export/home
rpool/swap                2G  58.5G  5.34M  -

The sample zfs list output identifies the root pool components, such as the rpool/ROOT directory, which is not accessible by default.

If you initially created your ZFS storage pool with one disk, you can convert it to a mirrored ZFS configuration after the installation completes by using the zpool attach command to attach an available disk. For example:


# zpool attach rpool c1t1d0s0 c1t2d0s0
# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 5.03% done, 0h13m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t2d0s0  ONLINE       0     0     0

errors: No known data errors

It will take some time to resilver the data to the new disk, but the pool is still available.

Until CR 6668666 is fixed, you will need to install the boot information on the additionally attached disks by using the installboot or installgrub commands if you want to enable booting on the other disks in the mirror. If you create a mirrored ZFS root pool with the initial installation method, then this step is unnecessary. For more information about installing boot information, see Booting From an Alternate Disk in a Mirrored ZFS Root Pool.
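
For example, to install the boot information on the newly attached disk, c1t2d0s0, you might run one of the following commands, depending on your platform. This is a sketch that mirrors the installboot and installgrub examples shown later in this chapter:


SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t2d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0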

For more information about adding or attaching disks, see Managing Devices in ZFS Storage Pools.

If you want to create another ZFS boot environment (BE) in the same storage pool, you can use the lucreate command. In the following example, a new BE named zfsnv1092BE is created. The current BE, zfsnv109BE, which is displayed in the zfs list output, is not acknowledged in the lustatus output until the new BE is created.


# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names

If you create a new ZFS BE in the same pool, use syntax similar to the following:


# lucreate -n zfsnv1092BE
Analyzing system configuration.
Comparing source boot environment <zfsnv109BE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsnv1092BE>.
Source boot environment is <zfsnv109BE>.
Creating boot environment <zfsnv1092BE>.
Cloning file systems from boot environment <zfsnv109BE> to create boot environment <zfsnv1092BE>.
Creating snapshot for <rpool/ROOT/zfsnv109BE> on <rpool/ROOT/zfsnv109BE@zfsnv1092BE>.
Creating clone for <rpool/ROOT/zfsnv109BE@zfsnv1092BE> on <rpool/ROOT/zfsnv1092BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfsnv1092BE>.
Population of boot environment <zfsnv1092BE> successful.
Creation of boot environment <zfsnv1092BE> successful.

Creating a ZFS BE within the same pool uses ZFS clone and snapshot features so the BE is created instantly. For more details about using Solaris Live Upgrade for a ZFS root migration, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).

Next, verify the new boot environments. For example:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsnv109BE                 yes      yes    yes       no     -         
zfsnv1092BE                yes      no     no        yes    -    
# zfs list 
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                   10.4G  56.5G    64K  /rpool
rpool/ROOT              7.42G  56.5G    18K  legacy
rpool/ROOT/zfsnv1092BE    97K  56.5G  7.42G  /tmp/.alt.luupdall.3244
rpool/ROOT/zfsnv109BE   7.42G  56.5G  7.42G  /
rpool/dump              1.00G  56.5G  1.00G  -
rpool/export              41K  56.5G    21K  /export
rpool/export/home         20K  56.5G    20K  /export/home
rpool/swap                 2G  58.5G  5.34M  -

If you want to boot from an alternate BE, use the luactivate command. After you activate the BE on a SPARC based system, use the boot -L command to identify the available BEs when the boot device contains a ZFS storage pool. On an x86 based system, identify the BE to be booted from the GRUB menu.
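
To activate the new BE and reboot into it, you might use commands similar to the following sketch, which reuses the BE name from this example:


# luactivate zfsnv1092BE
# init 6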

For example, on a SPARC based system, use the boot -L command to display a list of available BEs. To boot from the new BE, zfsnv1092BE, select option 2. Then, type the displayed boot -Z command.


ok boot -L
Rebooting with command: boot -L                                       
Boot device: /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a  File and args: -L
1 zfsnv109BE
2 zfsnv1092BE
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsnv1092BE

Program terminated
ok boot -Z rpool/ROOT/zfsnv1092BE

For more information about booting a ZFS file system, see Booting From a ZFS Root File System.


Installing a ZFS Root File System (JumpStart Installation)

You can create a JumpStart profile to install a ZFS root file system or a UFS root file system.

A ZFS specific profile must contain the new pool keyword. The pool keyword installs a new root pool and a new boot environment is created by default. You can provide the name of the boot environment and can create a separate /var dataset with the bootenv installbe keywords and bename and dataset options.

For general information about using JumpStart features, see Solaris Express Installation Guide: Custom JumpStart and Advanced Installations.

ZFS JumpStart Profile Examples

This section provides examples of ZFS specific JumpStart profiles.

The following profile performs an initial installation specified with install_type initial_install in a new pool, identified with pool newpool, whose size is automatically sized with the auto keyword to the size of the specified disks. The swap area and dump device are automatically sized with the auto keyword in a mirrored configuration of disks (with the mirror keyword and disks specified as c0t0d0s0 and c0t1d0s0). Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a bename named sxce-xx is created.


install_type initial_install
pool newpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv installbe bename sxce-xx

The following profile performs an initial installation with keyword install_type initial_install of the SUNWCall metacluster in a new pool called newpool that is 80 Gbytes in size. This pool is created with a 2-Gbyte swap volume and a 2-Gbyte dump volume, in a mirrored configuration of any two available devices that are large enough to create an 80-Gbyte pool. If two such devices aren't available, the installation fails. Boot environment characteristics are set with the bootenv keyword to install a new BE with the keyword installbe, and a bename named sxce-xx is created.


install_type initial_install
cluster SUNWCall
pool newpool 80g 2g 2g mirror any any
bootenv installbe bename sxce-xx

JumpStart installation syntax supports the ability to preserve or create a UFS file system on a disk that also includes a ZFS root pool. This configuration is not recommended for production systems, but could be used for transition or migration needs on a small system, such as a laptop.
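
As a sketch of such a transition configuration, the following hypothetical profile keeps an existing UFS file system on slice 7 of the boot disk while creating the ZFS root pool on slice 0 of the same disk. The slice layout and the /data mount point are assumptions for illustration only:


install_type initial_install
cluster SUNWCall
pool newpool auto auto auto c0t0d0s0
bootenv installbe bename sxce-xx
filesys c0t0d0s7 existing /data preserve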

ZFS JumpStart Keywords

The following keywords are permitted in a ZFS specific profile:

auto

Specifies the size of the slices for the pool, swap volume, or dump volume automatically. The size of the disk is checked to verify that the minimum size can be accommodated. If the minimum size can be accommodated, the largest possible pool size is allocated, given the constraints, such as the size of the disks, preserved slices, and so on.

For example, if you specify c0t0d0s0, the slice is created as large as possible if you specify either the all or auto keywords. Or, you can specify a particular size for the slice or swap or dump volume.

The auto keyword works similarly to the all keyword when used with a ZFS root pool because pools don't have the concept of unused space.

bootenv

This keyword identifies the boot environment characteristics.

The bootenv keyword already exists, but new options are defined. Use the following bootenv keyword syntax to create a bootable ZFS root environment:

bootenv installbe bename BE-name [dataset mount-point]

installbe

Creates a new BE that is identified by the bename option and BE-name entry and installs it.

bename BE-name

Identifies the BE-name to install.

If bename is not used with the pool keyword, then a default BE is created.

dataset mount-point

Use the optional dataset keyword to identify a /var dataset that is separate from the root dataset. The mount-point value is currently limited to /var. For example, a bootenv syntax line for a separate /var dataset would be similar to the following:


bootenv installbe bename zfsroot dataset /var
pool

Defines the new root pool to be created. The following keyword syntax must be provided:


poolname poolsize swapsize dumpsize vdevlist
poolname

Identifies the name of the pool to be created. The pool is created with the specified pool size and with the specified physical devices (vdevs). The poolname option should not identify the name of an existing pool or the existing pool is overwritten.

poolsize

Specifies the size of the pool to be created. The value can be auto or existing. The auto value means allocate the largest possible pool size, given the constraints, such as size of the disks, preserved slices, and so on. The existing value means the boundaries of existing slices by that name are preserved and overwritten. The size is assumed to be in Mbytes, unless specified by g (Gbytes).

swapsize

Specifies the size of the swap volume to be created. The value can be auto, which means the default swap size is used, or size, to specify a size. The size is assumed to be in Mbytes, unless specified by g (Gbytes).

dumpsize

Specifies the size of the dump volume to be created. The value can be auto, which means the default dump size is used, or size, to specify a size. The size is assumed to be in Mbytes, unless specified by g (Gbytes).

vdevlist

Specifies one or more devices that are used to create the pool. The format of the vdevlist is the same as the format of the zpool create command. At this time, only mirrored configurations are supported when multiple devices are specified. Devices in the vdevlist must be slices for the root pool. The any string means that the installation software selects a suitable device.

You can mirror as many disks as you like, but the size of the pool that is created is determined by the smallest of the specified disks. For more information about creating mirrored storage pools, see Mirrored Storage Pool Configuration.

ZFS JumpStart Issues

Consider the following issues before starting a JumpStart installation of a bootable ZFS root file system.

Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade)

Previous Solaris Live Upgrade features are still available, and features that relate to UFS components work as they did in previous Solaris releases.

The following features are available:

For detailed information about Solaris installation and Solaris Live Upgrade features, see the Solaris Express Installation Guide: Solaris Live Upgrade and Upgrade Planning.

The basic process for migrating a UFS root file system to a ZFS root file system is to create a ZFS root pool on slices, create a new ZFS BE in that pool with the lucreate command, activate the new BE with the luactivate command, and then reboot.
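
In condensed form, the migration uses commands similar to the following sketch, which is based on Example 5–2 later in this chapter. The pool and BE names are placeholders:


# zpool create mpool mirror c1t0d0s0 c1t2d0s0
# lucreate -c ufsBE -n zfsBE -p mpool
# luactivate zfsBE
# init 6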

For information about ZFS and Solaris Live Upgrade requirements, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.

Required Solaris Live Upgrade Patch Information

For the SXCE, build 90 release, the install images are now compressed with the 7zip utility. If you want to install the appropriate patches rather than upgrading or reinstalling to build 90, you will need to apply the following patches for Solaris Live Upgrade to succeed with the SXCE, build 90 release:

This chapter doesn't cover Solaris 10 issues, but if you are attempting to use Solaris Live Upgrade from a Solaris 10 release to the Nevada, build 90 release, the Solaris 10 5/08 release has had the 7zip utility since build 5. The patches listed above are only necessary if you are running releases older than the Solaris 10 5/08 release.

If you want to use Solaris Live Upgrade from a Solaris 10 system with zones installed, you also need to apply the following additional cpio patches:

If you want to use Solaris Live Upgrade from Nevada builds before build 79, you must install the SUNWp7zip package from the latest Nevada build.

ZFS Solaris Live Upgrade Migration Issues

Review the following list of issues before you use Solaris Live Upgrade to migrate your UFS root file system to a ZFS root file system:

Using Solaris Live Upgrade to Migrate to a ZFS Root File System


Example 5–2 Using Solaris Live Upgrade to Migrate a UFS Root File System to a ZFS Root File System

The following example shows how to create a BE of a ZFS root file system from a UFS root file system. The current BE, ufsnv109BE, which contains a UFS root file system on c1t1d0s0, is identified by the -c option. The new BE, zfsnv109BE, is identified by the -n option. A ZFS storage pool must exist before the lucreate operation. The ZFS storage pool must be created with slices rather than whole disks to be upgradeable and bootable.


# zpool create mpool mirror c1t0d0s0 c1t2d0s0
# lucreate -c ufsnv109BE -n zfsnv109BE -p mpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsnv109BE>.
Creating initial configuration for primary boot environment <ufsnv109BE>.
The device </dev/dsk/c1t1d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsnv109BE> PBE Boot Device </dev/dsk/c1t1d0s0>.
Comparing source boot environment <ufsnv109BE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsnv109BE>.
Source boot environment is <ufsnv109BE>.
Creating boot environment <zfsnv109BE>.
Creating file systems on boot environment <zfsnv109BE>.
Creating <zfs> file system for </> in zone <global> on <mpool/ROOT/zfsnv109BE>.
Populating file systems on boot environment <zfsnv109BE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsnv109BE>.
Creating compare database for file system </mpool/ROOT>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsnv109BE>.
Making boot environment <zfsnv109BE> bootable.
Creating boot_archive for /.alt.tmp.b-0ob.mnt
updating /.alt.tmp.b-0ob.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsnv109BE> successful.
Creation of boot environment <zfsnv109BE> successful.

After the lucreate operation completes, use the lustatus command to view the BE status. For example:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsnv109BE                 yes      yes    yes       no     -         
zfsnv109BE                 yes      no     no        yes    -    

Then, review the list of ZFS components. For example:


# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
mpool                  9.95G  41.2G    21K  /mpool
mpool/ROOT             7.45G  41.2G    19K  /mpool/ROOT
mpool/ROOT/zfsnv109BE  7.45G  41.2G  7.45G  /tmp/.alt.luupdall.5232
mpool/dump                2G  43.2G    16K  -
mpool/swap              517M  41.7G    16K  -

Next, use the luactivate command to activate the new ZFS BE. For example:


# luactivate zfsnv109BE

**********************************************************************

The target boot environment has been activated. It will be used when you 
reboot. NOTE: You MUST NOT USE the reboot, halt, or uadmin commands. You 
MUST USE either the init or the shutdown command when you reboot. If you 
do not use either init or shutdown, the system will not boot using the 
target BE.

**********************************************************************

In case of a failure while booting to the target BE, the following process 
needs to be followed to fallback to the currently working boot environment:

1. Enter the PROM monitor (ok prompt).

2. Change the boot device back to the original boot environment by typing:

     setenv boot-device /pci@1f,700000/scsi@2/disk@1,0:a

3. Boot to the original boot environment by typing:

     boot

**********************************************************************

Modifying boot archive service
Activation of boot environment <zfsnv109BE> successful.

Next, reboot the system to the ZFS BE.


# init 6

Confirm that the ZFS BE is active.


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
ufsnv109BE                 yes      no     no        yes    -         
zfsnv109BE                 yes      yes    yes       no     -    

If you switch back to the UFS BE, you will need to re-import any ZFS storage pools that were created while the ZFS BE was booted because they are not automatically available in the UFS BE.

If the UFS BE is no longer required, you can remove it with the ludelete command.
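
For example, to remove the UFS BE from this example after you have confirmed that the ZFS BE works as expected, you might run a command similar to the following:


# ludelete ufsnv109BE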



Example 5–3 Using Solaris Live Upgrade to Create a ZFS BE From a ZFS BE

Creating a ZFS BE from a ZFS BE in the same pool is very quick because this operation uses ZFS snapshot and clone features. If the current BE resides on the same ZFS pool mpool, for example, the -p option is omitted.

If you have multiple ZFS BEs on a SPARC based system, you can use the boot -L command to identify the available BEs and select a BE from which to boot by using the boot -Z command. On an x86 based system, you can select a BE from the GRUB menu. For more information, see Example 5–7.


# lucreate -n zfsnv1092BE
Analyzing system configuration.
Comparing source boot environment <zfsnv109BE> file systems with the file 
system(s) you specified for the new boot environment. Determining which 
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsnv1092BE>.
Source boot environment is <zfsnv109BE>.
Creating boot environment <zfsnv1092BE>.
Cloning file systems from boot environment <zfsnv109BE> to create boot environment <zfsnv1092BE>.
Creating snapshot for <mpool/ROOT/zfsnv109BE> on <mpool/ROOT/zfsnv109BE@zfsnv1092BE>.
Creating clone for <mpool/ROOT/zfsnv109BE@zfsnv1092BE> on <mpool/ROOT/zfsnv1092BE>.
Setting canmount=noauto for </> in zone <global> on <mpool/ROOT/zfsnv1092BE>.
Population of boot environment <zfsnv1092BE> successful.
Creation of boot environment <zfsnv1092BE> successful.


Example 5–4 Upgrading Your ZFS BE (luupgrade)

You can upgrade your ZFS BE to a later build by using the luupgrade command. The following example shows how to upgrade a ZFS BE from build 109 to build 110.

The basic process is to run the luupgrade command with the -u option and the path to the installation image. For example:


# luupgrade -n zfsnv109BE -u -s /net/install/export/nv/combined.nvs_wos/110
50687 blocks
miniroot filesystem is <lofs>
Mounting miniroot at </net/install/export/nv/combined.nvs_wos/110/Solaris_11/Tools/Boot>
Validating the contents of the media </net/install/export/nv/combined.nvs_wos/110>.
The media is a standard Solaris media.
The media contains an operating system upgrade image.
The media contains <Solaris> version <11>.
Constructing upgrade profile to use.
Locating the operating system upgrade program.
Checking for existence of previously scheduled Live Upgrade requests.
Creating upgrade profile for BE <zfsnv109BE>.
Determining packages to install or upgrade for BE <zfsnv109BE>.
Performing the operating system upgrade of the BE <zfsnv109BE>.
CAUTION: Interrupting this process may leave the boot environment unstable 
or unbootable.
Upgrading Solaris: 100% completed
Installation of the packages from this media is complete.
Adding operating system patches to the BE <zfsnv109BE>.
The operating system patch installation is complete.
INFORMATION: The file </var/sadm/system/logs/upgrade_log> on boot 
environment <zfsnv109BE> contains a log of the upgrade operation.
INFORMATION: The file </var/sadm/system/data/upgrade_cleanup> on boot 
environment <zfsnv109BE> contains a log of cleanup operations required.
INFORMATION: Review the files listed above. Remember that all of the files 
are located on boot environment <zfsnv109BE>. Before you activate boot 
environment <zfsnv109BE>, determine if any additional system maintenance 
is required or if additional media of the software distribution must be 
installed.
The Solaris upgrade of the boot environment <zfsnv109BE> is complete.

ZFS Support for Swap and Dump Devices

During an initial installation or a Solaris Live Upgrade from a UFS file system, a swap area is created on a ZFS volume in the ZFS root pool. For example:


# swap -l
swapfile                  dev    swaplo   blocks     free
/dev/zvol/dsk/mpool/swap 253,3        16  8257520  8257520

During an initial installation or a Solaris Live Upgrade from a UFS file system, a dump device is created on a ZFS volume in the ZFS root pool. In general, a dump device requires no administration because it is set up automatically at installation time. For example:


# dumpadm
      Dump content: kernel pages
       Dump device: /dev/zvol/dsk/mpool/dump (dedicated)
Savecore directory: /var/crash/t2000
  Savecore enabled: yes

If you disable and remove the dump device, then you will need to enable it with the dumpadm command after it is recreated. In most cases, you will only have to adjust the size of the dump device by using the zfs command.
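
For example, a minimal sketch of growing the dump volume to a hypothetical size of 2 Gbytes and re-enabling it, using the mpool pool from the example above:


# zfs set volsize=2G mpool/dump
# dumpadm -d /dev/zvol/dsk/mpool/dump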

For information about the swap and dump volume sizes that are created by the installation programs, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.

Both the swap volume size and the dump volume size can be adjusted during and after installation. For more information, see Adjusting the Sizes of Your ZFS Swap and Dump Devices.

Consider the following issues when working with ZFS swap and dump devices:

See the following sections for more information:

Adjusting the Sizes of Your ZFS Swap and Dump Devices

Because of the differences in the way a ZFS root installation sizes swap and dump devices, you might need to adjust the size of swap and dump devices before, during, or after installation.
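
For example, the following sketch resizes the swap volume after installation, assuming that the swap device is not in use and using a hypothetical size of 2 Gbytes:


# swap -d /dev/zvol/dsk/mpool/swap
# zfs set volsize=2G mpool/swap
# swap -a /dev/zvol/dsk/mpool/swap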

Troubleshooting ZFS Dump Device Issues

Review the following items if you have problems either capturing a system crash dump or resizing the dump device.

Booting From a ZFS Root File System

Both SPARC based and x86 based systems use the new style of booting with a boot archive, which is a file system image that contains the files required for booting. When booting from a ZFS root file system, the path names of both the boot archive and the kernel file are resolved in the root file system that is selected for booting.

When the system is booted for installation, a RAM disk is used for the root file system during the entire installation process.

Booting from a ZFS file system differs from booting from a UFS file system because with ZFS, a device specifier identifies a storage pool, not a single root file system. A storage pool can contain multiple bootable datasets or ZFS root file systems. When booting from ZFS, you must specify a boot device and a root file system within the pool that was identified by the boot device.

By default, the dataset selected for booting is the one identified by the pool's bootfs property. This default selection can be overridden by specifying an alternate bootable dataset that is included in the boot -Z command.
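
For example, you can display or change the default bootable dataset with the zpool command. The following sketch reuses the pool and BE names from the earlier examples:


# zpool get bootfs rpool
# zpool set bootfs=rpool/ROOT/zfsnv1092BE rpool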

Booting From an Alternate Disk in a Mirrored ZFS Root Pool

You can create a mirrored ZFS root pool when the system is installed, or you can attach a disk to create a mirrored ZFS root pool after installation. Review the following known issues regarding mirrored ZFS root pools:

Booting From a ZFS Root File System on a SPARC Based System

On a SPARC based system with multiple ZFS BEs, you can boot from any BE by using the luactivate command.

During the installation and Solaris Live Upgrade process, the ZFS root file system is automatically designated with the bootfs property.

Multiple bootable datasets can exist within a pool. By default, the bootable dataset entry in the /pool-name/boot/menu.lst file is identified by the pool's bootfs property. However, a menu.lst entry can contain a bootfs command, which specifies an alternate dataset in the pool. In this way, the menu.lst file can contain entries for multiple root file systems within the pool.

When a system is installed with a ZFS root file system or migrated to a ZFS root file system, an entry similar to the following is added to the menu.lst file:


title zfsnv109BE
bootfs mpool/ROOT/zfsnv109BE

When a new BE is created, the menu.lst file is updated. Until CR 6696226 is fixed, you must update the menu.lst file manually after you activate the BE with the luactivate command.

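For example, after a second BE such as zfsnv1092BE is created, the menu.lst file might contain entries similar to the following sketch:


title zfsnv109BE
bootfs mpool/ROOT/zfsnv109BE
title zfsnv1092BE
bootfs mpool/ROOT/zfsnv1092BE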

On a SPARC based system, two new boot options are available:


Example 5–5 Booting From a Specific ZFS Boot Environment

If you have multiple ZFS BEs in a ZFS storage pool on your system's boot device, you can use the luactivate command to specify a default BE.

For example, the following ZFS BEs are available as described by the lustatus output:


# lustatus
Boot Environment           Is       Active Active    Can    Copy      
Name                       Complete Now    On Reboot Delete Status    
-------------------------- -------- ------ --------- ------ ----------
zfsnv109BE                   yes      yes    yes       no     -         
zfsnv1092BE                  yes      no     no        yes    -      

If you have multiple ZFS BEs on your SPARC based system, you can use the boot -L command. For example:


ok boot -L
Rebooting with command: boot -L                                       
Boot device: /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a  File and args: -L
1 zfsnv109BE
2 zfsnv1092BE
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z mpool/ROOT/zfsnv1092BE

Program terminated
ok boot -Z mpool/ROOT/zfsnv1092BE


Example 5–6 SPARC: Booting a ZFS File System in Failsafe Mode

On a SPARC based system, you can boot from the failsafe archive located in /platform/`uname -i`/failsafe. For example:


ok boot -F failsafe

If you want to boot a failsafe archive from a particular ZFS bootable dataset, use syntax similar to the following:


ok boot -Z rpool/ROOT/zfsnv109BE -F failsafe

Booting From a ZFS Root File System on an x86 Based System

The following entries are added to the /pool-name/boot/grub/menu.lst file during the installation process or Solaris Live Upgrade operation to boot ZFS automatically:


title Solaris Express Community Edition zfsnv109BE X86
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/zfsnv109BE
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive

If the device identified by GRUB as the boot device contains a ZFS storage pool, the menu.lst file is used to create the GRUB menu.

On an x86 based system with multiple ZFS BEs, you can select a BE from the GRUB menu. If the root file system corresponding to this menu entry is a ZFS dataset, the following option is added.


-B $ZFS-BOOTFS

Example 5–7 x86: Booting a ZFS File System

When booting from a ZFS file system, the root device is specified by the boot -B $ZFS-BOOTFS parameter on either the kernel or module line in the GRUB menu entry. This value, similar to all parameters specified by the -B option, is passed by GRUB to the kernel. For example:


title Solaris Express Community Edition zfsnv109BE X86
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/zfsnv109BE
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS
module$ /platform/i86pc/$ISADIR/boot_archive


Example 5–8 x86: Booting a ZFS File System in Failsafe Mode

The x86 failsafe archive is /boot/x86.miniroot-safe and can be booted by selecting the Solaris failsafe entry from the GRUB menu. For example:


title zfsnv109BE failsafe
findroot (pool_rpool,0,a)
bootfs rpool/ROOT/zfsnv109BE
kernel /boot/platform/i86pc/kernel/unix -s -B console=ttyb
module /boot/x86.miniroot-safe


Example 5–9 x86: Fast Rebooting a ZFS Root File System

In the Solaris Express Community Edition, build 100, the fast reboot feature provides the ability to reboot within seconds on x86 based systems. With the fast reboot feature, you can reboot to a new kernel without experiencing the long delays that can be imposed by the BIOS and boot loader. The ability to fast reboot a system drastically reduces down time and improves efficiency.

You must still use the init 6 command when transitioning between BEs with the luactivate command. For other system operations where the reboot command is appropriate, you can use the reboot -f command. For example:


# reboot -f

Booting For Recovery Purposes in a ZFS Root Environment

Use the following procedure if you need to boot the system so that you can recover from a lost root password or similar problem.

You will need to boot failsafe mode or boot from alternate media, depending on the severity of the error. In general, you can boot failsafe mode to recover a lost or unknown root password. The OpenSolaris release does not support failsafe mode.

If you need to recover a root pool or root pool snapshot, see Recovering the ZFS Root Pool or Root Pool Snapshots.

Procedure How to Boot ZFS Failsafe Mode

  1. Boot failsafe mode.

    On a SPARC system:


    ok boot -F failsafe
    

    On an x86 system, select failsafe mode from the GRUB prompt.

  2. Mount the ZFS BE on /a when prompted:


    .
    .
    .
    ROOT/zfsBE was found on rpool.
    Do you wish to have it mounted read-write on /a? [y,n,?] y
    mounting rpool on /a
    Starting shell.
  3. Change to the /a/etc directory.


    # cd /a/etc
    
  4. If necessary, set the TERM type.


    # TERM=vt100
    # export TERM
  5. Correct the passwd or shadow file.


    # vi shadow
    
  6. Reboot the system.


    # init 6
    

Procedure How to Boot ZFS From Alternate Media

If a problem prevents the system from booting successfully or some other severe problem occurs, you will need to boot from a network install server or from a Solaris installation CD, import the root pool, mount the ZFS BE, and attempt to resolve the issue.

You can use this procedure on a system that is running the OpenSolaris release to recover a lost root password or similar problem.

  1. Boot from an installation CD or from the network.

    On a SPARC system:


    ok boot cdrom -s 
    ok boot net -s 

    If you don't use the -s option, you will need to exit the installation program.

    On an x86 system, select the network boot or boot from local CD option.

  2. Import the root pool and specify an alternate mount point. For example:


    # zpool import -R /a rpool
    
  3. Mount the ZFS BE. For example:


    # zfs mount rpool/ROOT/zfsBE
    
  4. Access the ZFS BE contents from the /a directory.


    # cd /a
    
  5. Reboot the system.


    # init 6
    

Recovering the ZFS Root Pool or Root Pool Snapshots

The following sections describe how to perform the following tasks:

Procedure How to Replace a Disk in the ZFS Root Pool

You might need to replace a disk in the root pool for the following reasons:

In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.

Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:


# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.

Identify the boot device pathnames of the current and new disk so that you can test booting from the replacement disk and, if the replacement disk fails, manually boot from the existing disk. In the example below, the current root pool disk (c1t10d0s0) is:


/pci@8,700000/pci@3/scsi@5/sd@a,0

In the example below, the replacement boot disk (c1t9d0s0) is:


/pci@8,700000/pci@3/scsi@5/sd@9,0
  1. Physically connect the replacement disk.

  2. Confirm that the replacement (new) disk has an SMI label and a slice 0.

    For information about relabeling a disk that is intended for the root pool, see the following site:

    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

  3. Attach the new disk to the root pool.

    For example:


    # zpool attach rpool c1t10d0s0 c1t9d0s0
    
  4. Confirm the root pool status.

    For example:


    # zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress, 25.47% done, 0h4m to go
    config:
    
            NAME           STATE     READ WRITE CKSUM
            rpool          ONLINE       0     0     0
              mirror       ONLINE       0     0     0
                c1t10d0s0  ONLINE       0     0     0
                c1t9d0s0   ONLINE       0     0     0
    
    errors: No known data errors
  5. After the resilvering is complete, apply the boot blocks to the new disk.

    For example:

    On a SPARC based system:


    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t9d0s0
    

    On an x86 based system:


    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t9d0s0
    
  6. Verify that you can boot from the new disk.

    For example, on a SPARC based system:


    ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0
    
  7. If the system boots from the new disk, detach the old disk.

    For example:


    # zpool detach rpool c1t10d0s0
    
  8. Set up the system to boot automatically from the new disk, either by using the eeprom command, by using the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
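
    For example, on a SPARC based system, you might use the eeprom command with the device path of the new disk from this example (a sketch; your device path will differ):


    # eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0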

Procedure How to Create Root Pool Snapshots

Create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to do a recursive snapshot of the root pool.

The procedure below creates a recursive root pool snapshot and stores the snapshot as a file in a pool on a remote system. In the case of a root pool failure, the remote dataset can be mounted by using NFS and the snapshot file received into the recreated pool. You can also store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Solaris OS miniroot.

For information about remotely storing and recovering root pool snapshots and the most up-to-date information about root pool recovery, go to this site:

http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

Validating remotely stored snapshots, whether they are stored as files or as actual snapshots, is an important step in root pool recovery. In either method, snapshots should be recreated on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.

In the following example, the system is booted from the zfsnv109BE boot environment.

  1. Create space on a remote system to store the snapshots.

    For example:


    remote# zfs create rpool/snaps
    
  2. Share the space to the local system.

    For example:


    remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
    # share
    -@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   "" 
  3. Create a recursive snapshot of the root pool.

    In this example, the system has two BEs, zfsnv109BE and zfsnv1092BE. The active BE is zfsnv109BE.


    local# zpool set listsnapshots=on mpool
    local# zfs snapshot -r mpool@0311
    local# zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    mpool                              9.98G  41.2G  22.5K  /mpool
    mpool@0311                             0      -  22.5K  -
    mpool/ROOT                         7.48G  41.2G    19K  /mpool/ROOT
    mpool/ROOT@0311                        0      -    19K  -
    mpool/ROOT/zfsnv1092BE               85K  41.2G  7.48G  /tmp/.alt.luupdall.2934
    mpool/ROOT/zfsnv1092BE@0311            0      -  7.48G  -
    mpool/ROOT/zfsnv109BE              7.48G  41.2G  7.45G  /
    mpool/ROOT/zfsnv109BE@zfsnv1092BE  28.7M      -  7.48G  -
    mpool/ROOT/zfsnv109BE@0311           58K      -  7.45G  -
    mpool/dump                         2.00G  41.2G  2.00G  -
    mpool/dump@0311                        0      -  2.00G  -
    mpool/swap                          517M  41.7G    16K  -
    mpool/swap@0311                        0      -    16K  -
  4. Send the root pool snapshots to the remote system.

    For example:


    local# zfs send -Rv mpool@0311 > /net/remote-system/rpool/snaps/mpool.0311
    sending from @ to mpool@0311
    sending from @ to mpool/swap@0311
    sending from @ to mpool/dump@0311
    sending from @ to mpool/ROOT@0311
    sending from @ to mpool/ROOT/zfsnv109BE@zfsnv1092BE
    sending from @zfsnv1092BE to mpool/ROOT/zfsnv109BE@0311
    sending from @ to mpool/ROOT/zfsnv1092BE@0311

Procedure How to Recreate a ZFS Root Pool and Restore Root Pool Snapshots

In this scenario, assume the following conditions:

All steps below are performed on the local system.

  1. Boot from CD/DVD or the network.

    On a SPARC based system, select one of the following boot methods:


    ok boot net -s
    ok boot cdrom -s
    

    If you don't use the -s option, you'll need to exit the installation program.

    On an x86 based system, select the option for booting from the DVD or the network. Then, exit the installation program.

  2. Mount the remote snapshot dataset.

    For example:


    # mount -F nfs remote-system:/rpool/snaps /mnt
    

    If your network services are not configured, you might need to specify the remote-system's IP address.

  3. If the root pool disk is replaced and does not contain a disk label that is usable by ZFS, you will have to relabel the disk.

    For more information about relabeling the disk, go to the following site:

    http://www.solarisinternals.com/wiki/index.php/ZFS_Troubleshooting_Guide

  4. Recreate the root pool.

    For example:


    # zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache mpool c1t0d0s0
    
  5. Restore the root pool snapshots.

    This step might take some time. For example:


    # cat /mnt/mpool.0311 | zfs receive -Fdu mpool
    

    Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.

  6. (Optional) If you want to modify something in a BE, you will need to explicitly mount the BE components, as follows:

    1. Mount the BE components. For example:


      # zfs mount mpool/ROOT/zfsnv109BE
      
    2. Mount everything in the pool that is not part of a BE. For example:


      # zfs mount -a mpool
      

    Other BEs are not mounted since they have canmount=noauto, which suppresses mounting when the zfs mount -a operation is done.

  7. Verify that the root pool datasets are restored.

    For example:


    # zfs list
    NAME                               USED  AVAIL  REFER  MOUNTPOINT
    mpool                              9.98G  41.2G  22.5K  /mpool
    mpool@0311                             0      -  22.5K  -
    mpool/ROOT                         7.48G  41.2G    19K  /mpool/ROOT
    mpool/ROOT@0311                        0      -    19K  -
    mpool/ROOT/zfsnv1092BE               85K  41.2G  7.48G  /tmp/.alt.luupdall.2934
    mpool/ROOT/zfsnv1092BE@0311            0      -  7.48G  -
    mpool/ROOT/zfsnv109BE              7.48G  41.2G  7.45G  /
    mpool/ROOT/zfsnv109BE@zfsnv1092BE  28.7M      -  7.48G  -
    mpool/ROOT/zfsnv109BE@0311           58K      -  7.45G  -
    mpool/dump                         2.00G  41.2G  2.00G  -
    mpool/dump@0311                        0      -  2.00G  -
    mpool/swap                          517M  41.7G    16K  -
    mpool/swap@0311                        0      -    16K  -
  8. Set the bootfs property on the root pool BE.

    For example:


    # zpool set bootfs=mpool/ROOT/zfsnv109BE mpool
    
  9. Install the boot blocks on the new disk.

    On a SPARC based system:


    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t5d0s0
    

    On an x86 based system:


    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t5d0s0
    
  10. Reboot the system.


    # init 6
    

Procedure How to Roll Back Root Pool Snapshots From a Failsafe Boot

This procedure assumes that existing root pool snapshots are available. In this example, the root pool snapshots are available on the local system. For example:


# zpool set listsnapshots=on rpool
# zfs snapshot -r rpool/ROOT@0311
# zfs list
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                        5.67G  1.04G  21.5K  /rpool
rpool/ROOT                   4.66G  1.04G    18K  /rpool/ROOT
rpool/ROOT@0311                  0      -    18K  -
rpool/ROOT/zfsnv109BE        4.66G  1.04G  4.66G  /
rpool/ROOT/zfsnv109BE@0311       0      -  4.66G  -
rpool/dump                    515M  1.04G   515M  -
rpool/swap                    513M  1.54G    16K  -
  1. Shut down the system and boot failsafe mode.


    ok boot -F failsafe
    Multiple OS instances were found. To check and mount one of them
    read-write under /a, select it from the following list. To not mount
    any, select 'q'.
    
     1  /dev/dsk/c1t1d0s0             Solaris Express Community Edition snv_109 SPARC
     2  rpool:7641827061132033134     ROOT/zfsnv1092BE
    
    Please select a device to be mounted (q for none) [?,??,q]: 2
    mounting rpool on /a
  2. Roll back the individual root pool snapshots.


    # zfs rollback -rf rpool/ROOT@0311
    
  3. Reboot back to multiuser mode.


    # init 6