Installing the ZFS Root Pool
As documented in Manually Installing an Oracle Solaris 11.4 System, you can use the text installer or the Automated Installer (AI) with an AI manifest to install Oracle Solaris. Both methods automatically install a ZFS root pool on a single disk. The installation also configures swap and dump devices on ZFS volumes in the root pool.
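On an installed system, the swap and dump devices that the installer configured can be inspected with standard Solaris commands. The following is a sketch; the exact volume names and sizes vary by system:

```shell
# List active swap devices; on a default install this includes
# the ZFS volume /dev/zvol/dsk/rpool/swap.
swap -l

# Show the configured dump device; on a default install this is
# the ZFS volume rpool/dump.
dumpadm

# List the backing ZFS volumes themselves.
zfs list -t volume -r rpool
```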
Note:
Starting with Oracle Solaris 11.4 SRU 36, the compression property is set to on by default. This property setting reduces space consumption in the root pool and might improve system performance.
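You can verify the property on your own system. A minimal check, assuming the default boot environment dataset name rpool/ROOT/solaris:

```shell
# Confirm whether compression is enabled on the root file system.
zfs get compression rpool/ROOT/solaris

# The compressratio property gives a rough measure of the space saved.
zfs get compressratio rpool/ROOT/solaris
```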
The AI method offers more flexibility in installing the root pool. In the AI manifest, you can specify the disks to use to create a mirrored root pool as well as enable ZFS properties, as shown in Example 6-1.
After Oracle Solaris is completely installed, perform the following actions:
- If the installation created a root pool on a single disk, manually convert the pool into a mirrored configuration. See How to Configure a Mirrored Root Pool (SPARC or x86/VTOC).
- Set a quota on the ZFS root file system to prevent the root file system from filling up. Currently, no ZFS root pool space is reserved as a safety net for a full file system. For example, if you have a 68 GB disk for the root pool, consider setting a 67 GB quota on the ZFS root file system (rpool/ROOT/solaris) to allow for 1 GB of remaining file system space. See Setting Quotas on ZFS File Systems.
- Create a root pool recovery archive for disaster recovery or for migration purposes by using the Oracle Solaris archive utility. For more information, refer to Using Unified Archives for System Recovery and Cloning in Oracle Solaris 11.4 and the archiveadm(8) man page.
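The post-installation actions above can be sketched as the following command sequence. The disk name c1t1d0 and the archive path are placeholders for illustration; substitute values appropriate to your system, and follow the referenced procedures for any platform-specific steps such as installing boot blocks:

```shell
# 1. Attach a second disk to convert a single-disk root pool into a
#    mirror (the new disk must be at least as large as the existing one).
zpool attach rpool c1t0d0 c1t1d0

# 2. Set a quota on the ZFS root file system, leaving about 1 GB of
#    headroom on a 68 GB root pool disk.
zfs set quota=67g rpool/ROOT/solaris

# 3. Create a recovery archive of the system with the archive utility.
archiveadm create --recovery /net/server/archives/rpool.uar
```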
Example 6-1 Modifying the AI Manifest to Customize Root Pool Installation
This example shows how to customize the AI manifest to perform the following:
- Create a mirrored root pool consisting of c1t0d0 and c2t0d0.
- Enable the root pool's listsnaps property.
<target>
  <disk whole_disk="true" in_zpool="rpool" in_vdev="mirrored">
    <disk_name name="c1t0d0" name_type="ctd"/>
  </disk>
  <disk whole_disk="true" in_zpool="rpool" in_vdev="mirrored">
    <disk_name name="c2t0d0" name_type="ctd"/>
  </disk>
  <logical>
    <zpool name="rpool" is_root="true">
      <vdev name="mirrored" redundancy="mirror"/>
      <!-- ... -->
      <filesystem name="export" mountpoint="/export"/>
      <filesystem name="export/home"/>
      <pool_options>
        <option name="listsnaps" value="on"/>
      </pool_options>
      <be name="solaris"/>
    </zpool>
  </logical>
</target>
Example 6-2 Sample Root Pool Configuration
The following example shows a mirrored root pool and file system configuration after an AI installation with a customized manifest.
$ zpool status rpool
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c8t0d0  ONLINE       0     0     0
            c8t1d0  ONLINE       0     0     0

$ zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   11.8G  55.1G  4.58M  /rpool
rpool/ROOT              3.57G  55.1G    31K  legacy
rpool/ROOT/solaris      3.57G  55.1G  3.40G  /
rpool/ROOT/solaris/var   165M  55.1G   163M  /var
rpool/VARSHARE          42.5K  55.1G  42.5K  /var/share
rpool/dump              6.19G  55.3G  6.00G  -
rpool/export              63K  55.1G    32K  /export
rpool/export/home         31K  55.1G    31K  /export/home
rpool/swap              2.06G  55.2G  2.00G  -