Managing ZFS File Systems in Oracle® Solaris 11.3

Updated: May 2019

Requirements for Configuring the ZFS Root Pool

ZFS is the default root file system of Oracle Solaris. The root pool contains the boot environment (BE) and is automatically created during the installation.

ZFS Root Pool Space Requirements

The sizes of the swap and dump volumes depend on the amount of physical memory. The minimum amount of pool space for a bootable ZFS root file system depends on the amount of physical memory, the disk space available, and the number of BEs to be created.

The 7 GB-13 GB minimum disk space recommended in Hardware and Software Requirements is consumed as follows:

  • Swap area and dump device – The default sizes of the swap and dump volumes that the installation program creates vary based on variables such as the amount of system memory. The dump device size is approximately half the size of physical memory or greater, depending on the system's activity.

    You can adjust the sizes of your swap and dump volumes during or after installation; a short sketch follows this list. The new sizes must support system operation. See Adjusting the Sizes of ZFS Swap and Dump Devices.

  • Boot environment – A ZFS BE is approximately 4 GB-6 GB. A ZFS BE that is cloned from another ZFS BE initially requires no additional disk space, but its size grows as the BE is updated, depending on the updates. All ZFS BEs in the same root pool use the same swap and dump devices.

  • Oracle Solaris Components – All subdirectories of the root file system that are part of the OS image, except /var, must reside in the root file system. All Oracle Solaris components except the swap and dump devices must reside in the root pool.
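
A minimal sketch of inspecting and adjusting these devices, assuming the default volume names rpool/swap and rpool/dump shown in Example 17 and an illustrative 4 GB dump size:

# swap -l                                 (list the active swap devices)
# dumpadm                                 (display the current dump configuration)
# zfs get volsize rpool/swap rpool/dump   (report the current volume sizes)
# zfs set volsize=4g rpool/dump           (grow the dump volume)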

ZFS Root Pool Configuration Recommendations

Follow these guidelines when configuring the ZFS root pool:

  • If you are using EFI (GPT) labeled disks, create the root pool on mirrored whole disks. If you are using SMI (VTOC) labeled disks, create the root pool on mirrored slices.

    In most cases, x86 systems and SPARC systems with GPT-aware firmware have EFI (GPT) labeled disks. SPARC systems without GPT-aware firmware have SMI (VTOC) labeled disks.

  • Do not rename the root pool after it is created by an initial installation, because the system might become unbootable. Also, as a best practice, do not change the default settings of the root pool, such as its mountpoint. Otherwise, errors might occur in subsequent operations on the boot environment. A quick way to check these defaults is sketched after this list.

  • Do not use a thinly provisioned VMware device for a root pool device.
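
To spot-check that the root pool defaults are intact, you can display a few properties. A minimal sketch, assuming the default pool name rpool:

# zpool get bootfs rpool      (dataset that the system boots from)
# zfs get mountpoint rpool    (the default mountpoint is /rpool)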

Root pools have the following limitations:

  • RAID-Z or striped configurations are not supported for root pools.

  • Root pools cannot have a separate log device.

  • You cannot configure multiple top-level virtual devices on root pools. However, you can expand a mirrored root pool by attaching additional devices, as shown in the example after this list.

  • The gzip and lz4 compression algorithms are not supported on root pools.
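
For example, a minimal sketch of expanding the mirrored root pool from Example 17 with a hypothetical new disk c8t2d0:

# zpool attach rpool c8t0d0 c8t2d0    (attach c8t2d0 as another mirror of c8t0d0)
# zpool status rpool                  (monitor the resilver until it completes)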

Installing the ZFS Root Pool

As documented in Installing Oracle Solaris 11.3 Systems, you can use the Live Media, the text installer, or the Automated Installer (AI) with an AI manifest to install Oracle Solaris. All three methods automatically create a ZFS root pool on a single disk. The installation also configures swap and dump devices on ZFS volumes in the root pool.

The AI method offers more flexibility in installing the root pool. In the AI manifest, you can specify the disks to use to create a mirrored root pool as well as enable ZFS properties, as shown in Example 16, Modifying the AI Manifest to Customize Root Pool Installation.

After Oracle Solaris is completely installed, review the resulting root pool and file system configuration, as shown in Example 17, Sample Root Pool Configuration.

Example 16  Modifying the AI Manifest to Customize Root Pool Installation

This example shows how to customize the AI manifest to perform the following:

  • Create a mirrored root pool consisting of c1t0d0 and c2t0d0.

  • Enable the root pool's listsnaps property.

<target>
  <disk whole_disk="true" in_zpool="rpool" in_vdev="mirrored">
    <disk_name name="c1t0d0" name_type="ctd"/>
  </disk>
  <disk whole_disk="true" in_zpool="rpool" in_vdev="mirrored">
    <disk_name name="c2t0d0" name_type="ctd"/>
  </disk>
  <logical>
    <zpool name="rpool" is_root="true">
      <vdev name="mirrored" redundancy="mirror"/>
      <!--
      ...
      -->
      <filesystem name="export" mountpoint="/export"/>
      <filesystem name="export/home"/>
      <pool_options>
        <option name="listsnaps" value="on"/>
      </pool_options>
      <be name="solaris"/>
    </zpool>
  </logical>
</target>
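
After an installation with this manifest, you can confirm that the pool option took effect. A minimal sketch:

# zpool get listsnapshots rpool    (listsnaps is shorthand for this property)
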
Example 17  Sample Root Pool Configuration

The following example shows a mirrored root pool and file system configuration after an AI installation with a customized manifest.

# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0

# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    11.8G  55.1G  4.58M  /rpool
rpool/ROOT               3.57G  55.1G    31K  legacy
rpool/ROOT/solaris       3.57G  55.1G  3.40G  /
rpool/ROOT/solaris/var    165M  55.1G   163M  /var
rpool/VARSHARE           42.5K  55.1G  42.5K  /var/share
rpool/dump               6.19G  55.3G  6.00G  -
rpool/export               63K  55.1G    32K  /export
rpool/export/home          31K  55.1G    31K  /export/home
rpool/swap               2.06G  55.2G  2.00G  -