ZFS is the default root file system of Oracle Solaris. The root pool contains the boot environment (BE) and is automatically created during the installation.
The sizes of the swap and dump volumes depend on the amount of physical memory. The minimum amount of pool space for a bootable ZFS root file system depends on the amount of physical memory, the available disk space, and the number of BEs to be created.
The 7 GB-13 GB minimum disk space recommended in Hardware and Software Requirements is consumed as follows:
Swap area and dump device – The default sizes of the swap and dump volumes that the installation program creates vary based on variables such as the amount of system memory. The dump device size is approximately half the size of physical memory or greater, depending on the system's activity.
You can adjust the sizes of your swap and dump volumes during or after installation, as long as the new sizes still support system operation. See Adjusting the Sizes of ZFS Swap and Dump Devices and the sketch following this list.
Boot environment – A ZFS BE is approximately 4 GB-6 GB. A ZFS BE that is cloned from another ZFS BE initially requires no additional disk space, although its size grows as the BE is updated, depending on the updates. All ZFS BEs in the same root pool use the same swap and dump devices.
Oracle Solaris Components – All subdirectories of the root file system that are part of the OS image, with the exception of /var, must be in the root file system. In addition, all Oracle Solaris components except the swap and dump devices must reside in the root pool.
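For example, you can check the current volume sizes and resize the dump volume with the zfs command. The following sketch reuses the sizes from the sample configuration shown later in this chapter; the 4g value and the solaris-backup BE name are placeholders only, and any new dump size must remain acceptable for your system's memory configuration:

# zfs list rpool/swap rpool/dump
NAME         USED  AVAIL  REFER  MOUNTPOINT
rpool/swap  2.06G  55.2G  2.00G  -
rpool/dump  6.19G  55.3G  6.00G  -
# zfs set volsize=4g rpool/dump

Because a cloned BE initially shares its blocks with the original BE, creating one consumes almost no additional pool space:

# beadm create solaris-backup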
Follow these guidelines when configuring the ZFS root pool:
If you are using EFI (GPT) labeled disks, create the root pool on mirrored whole disks. If you are using SMI (VTOC) labeled disks, create the root pool on mirrored slices. See the device naming sketch following these guidelines.
In most cases, x86 systems and SPARC systems with GPT-aware firmware have EFI (GPT) labeled disks. Otherwise, SPARC systems have SMI (VTOC) labeled disks.
Do not rename the root pool after the initial installation creates it. Renaming the root pool might render the system unbootable.
Do not use a thinly provisioned VMware device for a root pool device.
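How you reference root pool devices depends on the label type: EFI (GPT) labeled disks are referenced by whole-disk names, while SMI (VTOC) labeled disks are referenced by slice names, typically slice 0. A minimal sketch using the hypothetical devices c1t0d0 and c2t0d0 (zpool attach is the command that converts a single-disk pool into a mirror):

On EFI (GPT) labeled disks:
# zpool attach rpool c1t0d0 c2t0d0

On SMI (VTOC) labeled disks:
# zpool attach rpool c1t0d0s0 c2t0d0s0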
Root pools have the following limitations:
RAID-Z or striped configurations are not supported for root pools.
Root pools cannot have a separate log device.
You cannot configure multiple top-level virtual devices on root pools. However, you can expand a mirrored root pool by attaching additional devices.
The gzip and lz4 compression algorithms are not supported on root pools.
As documented in Installing Oracle Solaris 11.3 Systems, you can use Live Media, text installer, or Automated Installer (AI) with the AI manifest to install Oracle Solaris. All three methods automatically install a ZFS root pool on a single disk. The installation also configures swap and dump devices on ZFS volumes on the root pool.
The AI method offers more flexibility in installing the root pool. In the AI manifest, you can specify the disks to use to create a mirrored root pool as well as enable ZFS properties, as shown in Example 16, Modifying the AI Manifest to Customize Root Pool Installation.
After Oracle Solaris is completely installed, perform the following actions:
If the installation created a root pool on a single disk, then manually convert the pool into a mirrored configuration, as sketched in the example following this list. See How to Configure a Mirrored Root Pool (SPARC or x86/VTOC).
Set a quota on the ZFS root file system to prevent the root file system from filling up. Currently, no ZFS root pool space is reserved as a safety net for a full file system. For example, if you have a 68 GB disk for the root pool, consider setting a 67 GB quota on the ZFS root file system (rpool/ROOT/solaris) to allow for 1 GB of remaining file system space. See Setting Quotas on ZFS File Systems.
Create a root pool recovery archive for disaster recovery or for migration purposes by using the Oracle Solaris archive utility. For more information, refer to Using Unified Archives for System Recovery and Cloning in Oracle Solaris 11.3 and the archiveadm(1M) man page.
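The following sketch walks through these steps for the 68 GB disk scenario; the device names, quota value, and archive path are examples only, and the -r (recovery) option to archiveadm create is assumed from the archiveadm(1M) man page:

Attach a second disk to convert the single-disk root pool to a mirror:
# zpool attach rpool c8t0d0 c8t1d0

Set and verify the quota on the root file system:
# zfs set quota=67g rpool/ROOT/solaris
# zfs get quota rpool/ROOT/solaris

Create the recovery archive:
# archiveadm create -r /var/tmp/recovery.uar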
Example 16 Modifying the AI Manifest to Customize Root Pool Installation
This example shows how to customize the AI manifest to perform the following actions:
Create a mirrored root pool consisting of c1t0d0 and c2t0d0.
Enable the root pool's listsnaps property.
<target>
  <disk whole_disk="true" in_zpool="rpool" in_vdev="mirrored">
    <disk_name name="c1t0d0" name_type="ctd"/>
  </disk>
  <disk whole_disk="true" in_zpool="rpool" in_vdev="mirrored">
    <disk_name name="c2t0d0" name_type="ctd"/>
  </disk>
  <logical>
    <zpool name="rpool" is_root="true">
      <vdev name="mirrored" redundancy="mirror"/>
      <!-- ... -->
      <filesystem name="export" mountpoint="/export"/>
      <filesystem name="export/home"/>
      <pool_options>
        <option name="listsnaps" value="on"/>
      </pool_options>
      <be name="solaris"/>
    </zpool>
  </logical>
</target>
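Note that the listsnaps manifest option is assumed here to correspond to the pool's listsnapshots property, which makes zfs list include snapshots by default. After installation, you can verify or change it directly on the pool:

# zpool get listsnapshots rpool
# zpool set listsnapshots=on rpool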
Example 17 Sample Root Pool Configuration
The following example shows a mirrored root pool and file system configuration after an AI installation with a customized manifest.
# zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   11.8G  55.1G  4.58M  /rpool
rpool/ROOT              3.57G  55.1G    31K  legacy
rpool/ROOT/solaris      3.57G  55.1G  3.40G  /
rpool/ROOT/solaris/var   165M  55.1G   163M  /var
rpool/VARSHARE          42.5K  55.1G  42.5K  /var/share
rpool/dump              6.19G  55.3G  6.00G  -
rpool/export              63K  55.1G    32K  /export
rpool/export/home         31K  55.1G    31K  /export/home
rpool/swap              2.06G  55.2G  2.00G  -