Installing a ZFS Root File System (Oracle Solaris Initial Installation)

In this Oracle Solaris release, you can perform an initial installation of a ZFS root file system either by using the interactive text installation method, which prompts you to choose a UFS or a ZFS root file system, or by using the JumpStart installation method with a profile that creates a ZFS root file system.

Before you begin the initial installation to create a ZFS storage pool, see Oracle Solaris Installation and Live Upgrade Requirements for ZFS Support.

If you will be configuring zones after the initial installation of a ZFS root file system and you plan on patching or upgrading the system, see Using Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) or Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).

If you already have ZFS storage pools on the system, they are acknowledged by the following message. However, these pools remain untouched, unless you select the disks in the existing pools to create the new storage pool.

There are existing ZFS pools available on this system.  However, they can only be upgraded 
using the Live Upgrade tools.  The following screens will only allow you to install a ZFS root system, 
not upgrade one.

Caution - Existing pools will be destroyed if any of their disks are selected for the new pool.


Example 4-1 Initial Installation of a Bootable ZFS Root File System

The interactive text installation process is basically the same as in previous Oracle Solaris releases, except that you are prompted to create a UFS or a ZFS root file system. UFS is still the default file system in this release. If you select a ZFS root file system, you are prompted to create a ZFS storage pool. The steps for installing a ZFS root file system follow:

  1. Insert the Oracle Solaris installation media or boot the system from an installation server. Then, select the interactive text installation method to create a bootable ZFS root file system.

    • SPARC: Use the following syntax for the Oracle Solaris Installation DVD:

      ok boot cdrom - text
    • SPARC: Use the following syntax when booting from the network:

      ok boot net - text
    • x86: Select the text-mode installation method.

    You can also create a ZFS flash archive to be installed by using the following methods:

    • JumpStart installation. For more information, see Example 4-2.

    • Initial installation. For more information, see Example 4-3.

    You can perform a standard upgrade to upgrade an existing bootable ZFS file system, but you cannot use this option to create a new bootable ZFS file system. Starting in the Solaris 10 10/08 release, you can migrate a UFS root file system to a ZFS root file system, as long as at least the Solaris 10 10/08 release is already installed. For more information about migrating to a ZFS root file system, see Migrating to a ZFS Root File System or Updating a ZFS Root File System (Live Upgrade).

  2. To create a ZFS root file system, select the ZFS option. For example:

    Choose Filesystem Type
    
      Select the filesystem to use for your Solaris installation
    
    
                [ ] UFS
                [X] ZFS
  3. After you select the software to be installed, you are prompted to select the disks on which to create your ZFS storage pool. This screen is similar to the one in previous releases.

    Select Disks
      On this screen you must select the disks for installing Solaris software.
      Start by looking at the Suggested Minimum field; this value is the
      approximate space needed to install the software you've selected. For ZFS,
      multiple disks will be configured as mirrors, so the disk you choose, or the
      slice within the disk must exceed the Suggested Minimum value.
      NOTE: ** denotes current boot disk
    
      Disk Device                                              Available Space
      =============================================================================
      [X] ** c1t0d0                                           139989 MB  (F4 to edit
    ) [ ]    c1t1d0                                           139989 MB
      [ ]    c1t2d0                                           139989 MB
      [ ]    c1t3d0                                           139989 MB
      [ ]    c2t0d0                                           139989 MB
      [ ]    c2t1d0                                           139989 MB
      [ ]    c2t2d0                                           139989 MB
      [ ]    c2t3d0                                           139989 MB
                                      Maximum Root Size: 139989 MB
                                      Suggested Minimum:  11102 MB

    You can select one or more disks to be used for your ZFS root pool. If you select two disks, a mirrored two-disk configuration is set up for your root pool. Either a two-disk or a three-disk mirrored pool is optimal. If you have eight disks and you select all of them, those eight disks are used for the root pool as one large mirror. This configuration is not optimal. Another option is to create a mirrored root pool after the initial installation is complete. A RAID-Z pool configuration for the root pool is not supported.

    For more information about configuring ZFS storage pools, see Replication Features of a ZFS Storage Pool.

  4. To select two disks to create a mirrored root pool, use the cursor control keys to select the second disk.

    In the following example, both c1t0d0 and c1t1d0 are selected for the root pool disks. Both disks must have an SMI label and a slice 0. If the disks are not labeled with an SMI label or they don't contain slices, then you must exit the installation program, use the format utility to relabel and repartition the disks, and then restart the installation program.

    Select Disks
      On this screen you must select the disks for installing Solaris software.
      Start by looking at the Suggested Minimum field; this value is the
      approximate space needed to install the software you've selected. For ZFS,
      multiple disks will be configured as mirrors, so the disk you choose, or the
      slice within the disk must exceed the Suggested Minimum value.
      NOTE: ** denotes current boot disk
    
      Disk Device                                              Available Space
      =============================================================================
      [X] ** c1t0d0                                           139989 MB  (F4 to edit
    ) [X]    c1t1d0                                           139989 MB
      [ ]    c1t2d0                                           139989 MB
      [ ]    c1t3d0                                           139989 MB
      [ ]    c2t0d0                                           139989 MB
      [ ]    c2t1d0                                           139989 MB
      [ ]    c2t2d0                                           139989 MB
      [ ]    c2t3d0                                           139989 MB
    
                                      Maximum Root Size: 139989 MB
                                      Suggested Minimum:  11102 MB

    If the Available Space column identifies 0 MB, the disk most likely has an EFI label. If you want to use a disk with an EFI label, you must exit the installation program, relabel the disk with an SMI label by using the format -e command, and then restart the installation program.
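
    For example, relabeling a disk with an SMI label in the format -e utility might look like the following sketch. The disk selection and prompts are abbreviated and vary by system; after relabeling, use the partition subcommand to set up slice 0 if needed before restarting the installation program.

    # format -e
    Searching for disks...done
    Specify disk (enter its number): 1
    selecting c1t1d0
    format> label
    [0] SMI Label
    [1] EFI Label
    Specify Label type[1]: 0
    Ready to label disk, continue? yes
    format> quit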

    If you do not create a mirrored root pool during installation, you can easily create one after the installation. For information, see How to Create a Mirrored ZFS Root Pool (Postinstallation).

    After you have selected one or more disks for your ZFS storage pool, a screen similar to the following is displayed:

    Configure ZFS Settings
      Specify the name of the pool to be created from the disk(s) you have chosen.
      Also specify the name of the dataset to be created within the pool that is
      to be used as the root directory for the filesystem.
    
    
                  ZFS Pool Name: rpool
          ZFS Root Dataset Name: s10nameBE
          ZFS Pool Size (in MB): 139990
      Size of Swap Area (in MB): 4096
      Size of Dump Area (in MB): 1024
            (Pool size must be between 7006 MB and 139990 MB)
    
                             [X] Keep / and /var combined
                             [ ] Put /var on a separate dataset
  5. From this screen, you can optionally change the name of the ZFS pool, the dataset name, the pool size, and the swap and dump device sizes by moving through the entries with the cursor control keys and replacing the default values with new values. Or, you can accept the default values. In addition, you can modify how the /var file system is created and mounted. (A sketch of adjusting the swap and dump volumes after installation follows the example below.)

    In this example, the root dataset name is changed to zfsBE.

                  ZFS Pool Name: rpool
          ZFS Root Dataset Name: zfsBE
          ZFS Pool Size (in MB): 139990
      Size of Swap Area (in MB): 4096
      Size of Dump Area (in MB): 1024
            (Pool size must be between 7006 MB and 139990 MB)
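
    If you later need different swap or dump volume sizes, you can adjust them after installation. The following is a minimal sketch that assumes the default rpool/swap and rpool/dump volume names, that the swap volume can be taken out of use, and arbitrary example sizes; see Adjusting the Sizes of Your ZFS Swap Device and Dump Device for the full procedure. The swap device is removed, resized, and re-added; the dump volume is resized, and dumpadm re-registers it if required.

    # swap -d /dev/zvol/dsk/rpool/swap
    # zfs set volsize=8G rpool/swap
    # swap -a /dev/zvol/dsk/rpool/swap
    # zfs set volsize=2G rpool/dump
    # dumpadm -d /dev/zvol/dsk/rpool/dump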
  6. At this final installation screen, you can optionally change the installation profile. For example:

    Profile
    
      The information shown below is your profile for installing Solaris software.
      It reflects the choices you've made on previous screens.
    
      ============================================================================
    
                    Installation Option: Initial
                            Boot Device: c1t0d0
                  Root File System Type: ZFS
                        Client Services: None
    
                                Regions: North America
                          System Locale: C ( C )
    
                               Software: Solaris 10, Entire Distribution
                              Pool Name: rpool
                  Boot Environment Name: zfsBE
                              Pool Size: 139990 MB
                        Devices in Pool: c1t0d0
                                         c1t1d0
  7. After the installation is completed, review the resulting ZFS storage pool and file system information. For example:

    # zpool status
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0
    
    errors: No known data errors
    # zfs list
    NAME                USED  AVAIL  REFER  MOUNTPOINT
    rpool              10.1G   124G   106K  /rpool
    rpool/ROOT         5.01G   124G    31K  legacy
    rpool/ROOT/zfsBE   5.01G   124G  5.01G  /
    rpool/dump         1.00G   124G  1.00G  -
    rpool/export         63K   124G    32K  /export
    rpool/export/home    31K   124G    31K  /export/home
    rpool/swap         4.13G   124G  4.00G  -

    The sample zfs list output identifies the root pool components, such as the rpool/ROOT directory, which is not accessible by default.
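
    For example, the rpool/ROOT dataset is assigned a legacy mountpoint and is not mounted. You can confirm this with the zfs get command; the following output is a sketch:

    # zfs get mountpoint,mounted rpool/ROOT
    NAME        PROPERTY    VALUE    SOURCE
    rpool/ROOT  mountpoint  legacy   local
    rpool/ROOT  mounted     no       -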

  8. To create another ZFS boot environment (BE) in the same storage pool, use the lucreate command.

    In the following example, a new BE named zfs2BE is created. The current BE is named zfsBE, as shown in the zfs list output. However, the current BE is not acknowledged in the lustatus output until the new BE is created.

    # lustatus
    ERROR: No boot environments are configured on this system
    ERROR: cannot determine list of all boot environment names

    If you create a new ZFS BE in the same pool, use syntax similar to the following:

    # lucreate -n zfs2BE
    INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
    Current boot environment is named <zfsBE>.
    Creating initial configuration for primary boot environment <zfsBE>.
    The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
    PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
    Comparing source boot environment <zfsBE> file systems with the file 
    system(s) you specified for the new boot environment. Determining which 
    file systems should be in the new boot environment.
    Updating boot environment description database on all BEs.
    Updating system configuration files.
    Creating configuration for boot environment <zfs2BE>.
    Source boot environment is <zfsBE>.
    Creating boot environment <zfs2BE>.
    Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
    Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
    Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
    Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
    Population of boot environment <zfs2BE> successful.
    Creation of boot environment <zfs2BE> successful.

    Creating a ZFS BE within the same pool uses ZFS clone and snapshot features to instantly create the BE. For more details about using Live Upgrade for a ZFS root migration, see Migrating to a ZFS Root File System or Updating a ZFS Root File System (Live Upgrade).
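
    Because the new BE is a clone, you can confirm its relationship to the original BE with the origin property. The following output is a sketch based on the dataset names in this example:

    # zfs get origin rpool/ROOT/zfs2BE
    NAME               PROPERTY  VALUE                    SOURCE
    rpool/ROOT/zfs2BE  origin    rpool/ROOT/zfsBE@zfs2BE  -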

  9. Next, verify the new boot environments. For example:

    # lustatus
    Boot Environment           Is       Active Active    Can    Copy      
    Name                       Complete Now    On Reboot Delete Status    
    -------------------------- -------- ------ --------- ------ ----------
    zfsBE                      yes      yes    yes       no     -         
    zfs2BE                     yes      no     no        yes    -        
    # zfs list
    NAME                      USED  AVAIL  REFER  MOUNTPOINT
    rpool                    10.1G   124G   106K  /rpool
    rpool/ROOT               5.00G   124G    31K  legacy
    rpool/ROOT/zfs2BE         218K   124G  5.00G  /
    rpool/ROOT/zfsBE         5.00G   124G  5.00G  /
    rpool/ROOT/zfsBE@zfs2BE   104K      -  5.00G  -
    rpool/dump               1.00G   124G  1.00G  -
    rpool/export               63K   124G    32K  /export
    rpool/export/home          31K   124G    31K  /export/home
    rpool/swap               4.13G   124G  4.00G  -
  10. To boot from an alternate BE, use the luactivate command (see the sketch after this step).

    • SPARC - Use the boot -L command to identify the available BEs when the boot device contains a ZFS storage pool.

      For example, on a SPARC based system, use the boot -L command to display a list of available BEs. To boot from the new BE, zfs2BE, select option 2. Then, type the displayed boot -Z command.

      ok boot -L
      Executing last command: boot -L                                       
      Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L
      1 zfsBE
      2 zfs2BE
      Select environment to boot: [ 1 - 2 ]: 2
      
      To boot the selected entry, invoke:
      boot [<root-device>] -Z rpool/ROOT/zfs2BE
      ok boot -Z rpool/ROOT/zfs2BE
    • x86 – Identify the BE to be booted from the GRUB menu.
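
    As noted at the start of this step, the new BE is activated with the luactivate command before the reboot. A minimal sketch follows; the activation output is lengthy and is abbreviated here. Use init (rather than reboot) so that Live Upgrade can complete the switch.

    # luactivate zfs2BE
    ...
    Activation of boot environment <zfs2BE> successful.
    # init 6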

For more information about booting a ZFS file system, see Booting From a ZFS Root File System.

How to Create a Mirrored ZFS Root Pool (Postinstallation)

If you did not create a mirrored ZFS root pool during installation, you can easily create one after installation.

For information about replacing a disk in a root pool, see How to Replace a Disk in the ZFS Root Pool.

  1. Display the current root pool status.
    # zpool status rpool
      pool: rpool
     state: ONLINE
     scrub: none requested
    config:
    
            NAME        STATE     READ WRITE CKSUM
            rpool       ONLINE       0     0     0
              c1t0d0s0  ONLINE       0     0     0
    
    errors: No known data errors
  2. Attach a second disk to configure a mirrored root pool.
    # zpool attach rpool c1t0d0s0 c1t1d0s0
    Make sure to wait until resilver is done before rebooting.
  3. View the root pool status to confirm that resilvering is complete.
    # zpool status rpool
      pool: rpool
     state: ONLINE
    status: One or more devices is currently being resilvered.  The pool will
            continue to function, possibly in a degraded state.
    action: Wait for the resilver to complete.
     scrub: resilver in progress for 0h1m, 24.26% done, 0h3m to go
    config:
    
            NAME          STATE     READ WRITE CKSUM
            rpool         ONLINE       0     0     0
              mirror-0    ONLINE       0     0     0
                c1t0d0s0  ONLINE       0     0     0
                c1t1d0s0  ONLINE       0     0     0  3.18G resilvered
    
    errors: No known data errors

    In the preceding output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:

    resilvered 10.0G in 0h10m with 0 errors on Thu Nov 15 12:48:33 2012
  4. Verify that you can boot successfully from the second disk.
  5. If necessary, set up the system to boot automatically from the new disk.
    • SPARC - Use the eeprom command or the setenv command from the SPARC boot PROM to reset the default boot device (see the sketch after this procedure).

    • x86 - Reconfigure the system BIOS.
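
The zpool attach operation might not apply boot blocks to the newly attached disk, depending on your Solaris 10 release. If booting from the second disk fails, the following is a hedged sketch of applying the boot blocks manually and resetting the default boot device. The device names reuse the examples from this procedure, and the SPARC device path is hypothetical for your system.

On a SPARC based system:

    # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
    # eeprom boot-device=/pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@1

On an x86 based system:

    # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0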