In this Solaris release, you can perform an initial installation by using the Solaris interactive text installer to create a ZFS storage pool that contains a bootable ZFS root file system. If you have an existing ZFS storage pool that you want to use for your ZFS root file system, then you must use Solaris Live Upgrade to migrate your existing UFS root file system to a ZFS root file system in that pool. For more information, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).
If you already have ZFS storage pools on the system, they are acknowledged by the following message and remain untouched, unless you select the disks in the existing pools to create the new storage pool.
There are existing ZFS pools available on this system.  However, they can
only be upgraded using the Live Upgrade tools.  The following screens will
only allow you to install a ZFS root system, not upgrade one.
Existing pools will be destroyed if any of their disks are selected for the new pool.
Before you begin the initial installation to create a ZFS storage pool, see Solaris Installation and Solaris Live Upgrade Requirements for ZFS Support.
The Solaris interactive text installation process is basically the same as in previous Solaris releases, except that you are prompted to create a UFS or a ZFS root file system. UFS is still the default file system in this release. If you select a ZFS root file system, you are prompted to create a ZFS storage pool. Installing a ZFS root file system involves the following steps:
Select the Solaris interactive installation method because a Solaris Flash installation is not available to create a bootable ZFS root file system.
You can perform a standard upgrade to upgrade an existing bootable ZFS file system that is running the SXCE, build 90 release, but you cannot use this option to create a new bootable ZFS file system. Starting in the Solaris 10 10/08 release, you can migrate a UFS root file system to a ZFS root file system by using Solaris Live Upgrade, as long as the SXCE, build 90 release is already installed. For more information about migrating to a ZFS root file system, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).
If you want to create a ZFS root file system, select the ZFS option. For example:
Choose Filesystem Type

  Select the filesystem to use for your Solaris installation


            [ ] UFS
            [X] ZFS
After you select the software to be installed, you are prompted to select the disks to create your ZFS storage pool. This screen is similar to the one in previous Solaris releases:
Select Disks

  On this screen you must select the disks for installing Solaris software.
  Start by looking at the Suggested Minimum field; this value is the
  approximate space needed to install the software you've selected. For ZFS,
  multiple disks will be configured as mirrors, so the disk you choose, or the
  slice within the disk must exceed the Suggested Minimum value.
  NOTE: ** denotes current boot disk

  Disk Device                                            Available Space
=============================================================================
  [X] ** c1t1d0                                          69994 MB
  [ ]    c1t2d0                                          69994 MB  (F4 to edit)

                                  Maximum Root Size:  69994 MB
                                  Suggested Minimum:   7466 MB
You can select the disk or disks to be used for your ZFS root pool. If you select two disks, a mirrored two-disk configuration is set up for your root pool. Either a two-disk or three-disk mirrored pool is optimal. If you have eight disks and you select all eight disks, those eight disks are used for the root pool as one big mirror. This configuration is not optimal. Another option is to create a mirrored root pool after the initial installation is complete. A RAID-Z pool configuration for the root pool is not supported. For more information about configuring ZFS storage pools, see Replication Features of a ZFS Storage Pool.
If you want to select two disks to create a mirrored root pool, use the cursor control keys to select the second disk. In the following example, both c1t1d0 and c1t2d0 are selected as the root pool disks. Both disks must have an SMI label and a slice 0. If the disks do not have an SMI label or do not contain slices, you must exit the installation program, use the format utility to relabel and repartition the disks, and then restart the installation program.
Select Disks

  On this screen you must select the disks for installing Solaris software.
  Start by looking at the Suggested Minimum field; this value is the
  approximate space needed to install the software you've selected. For ZFS,
  multiple disks will be configured as mirrors, so the disk you choose, or the
  slice within the disk must exceed the Suggested Minimum value.
  NOTE: ** denotes current boot disk

  Disk Device                                            Available Space
=============================================================================
  [X] ** c1t1d0                                          69994 MB
  [X]    c1t2d0                                          69994 MB  (F4 to edit)

                                  Maximum Root Size:  69994 MB
                                  Suggested Minimum:   7466 MB
If the Available Space column identifies 0 MB, this generally indicates that the disk has an EFI label.
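If you need to relabel such a disk with an SMI label before restarting the installation, you can use the format utility in expert mode. The following transcript is only a sketch; the disk name c1t2d0 is taken from the example above, and the exact menu text varies by Solaris release:

# format -e c1t2d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
format> quit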
After you have selected a disk or disks for your ZFS storage pool, a screen that looks similar to the following is displayed:
Configure ZFS Settings

  Specify the name of the pool to be created from the disk(s) you have chosen.
  Also specify the name of the dataset to be created within the pool that is
  to be used as the root directory for the filesystem.


              ZFS Pool Name: rpool
      ZFS Root Dataset Name: snv_109
      ZFS Pool Size (in MB): 69995
  Size of Swap Area (in MB): 2048
  Size of Dump Area (in MB): 1024
        (Pool size must be between 10076 MB and 69995 MB)

                 [X] Keep / and /var combined
                 [ ] Put /var on a separate dataset
From this screen, you can change the name of the ZFS pool, the dataset name, the pool size, and the swap and dump device sizes by using the cursor control keys to move through the entries and replacing the default text value with new text. Or, you can accept the default values. In addition, you can modify the way the /var file system is created and mounted.
In this example, the root dataset name is changed to zfsnv109BE.
              ZFS Pool Name: rpool
      ZFS Root Dataset Name: zfsnv109BE
      ZFS Pool Size (in MB): 34731
        (Pool size must be between 6413 MB and 34731 MB)
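If you later decide that the swap or dump sizes you chose here are too small, you can resize the underlying ZFS volumes after installation. The following commands are a sketch that assumes the default rpool/swap and rpool/dump volumes and that the swap device can be temporarily removed; the 4-Gbyte and 2-Gbyte sizes are illustrative:

# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=4G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap
# zfs set volsize=2G rpool/dump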
You can change the installation profile at this final installation screen. For example:
Profile

  The information shown below is your profile for installing Solaris software.
  It reflects the choices you've made on previous screens.

  ============================================================================

                  Installation Option: Initial
                          Boot Device: c1t1d0
                Root File System Type: ZFS
                      Client Services: None

                              Regions: North America
                        System Locale: C ( C )

                             Software: Solaris 11, Entire Distribution
                            Pool Name: rpool
                Boot Environment Name: zfsnv109BE
                            Pool Size: 69995 MB
                      Devices in Pool: c1t1d0
After the installation is complete, review the resulting ZFS storage pool and file system information. For example:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t1d0s0  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
rpool                  10.4G  56.5G    64K  /rpool
rpool/ROOT             7.43G  56.5G    18K  legacy
rpool/ROOT/zfsnv109BE  7.43G  56.5G  7.43G  /
rpool/dump             1.00G  56.5G  1.00G  -
rpool/export             41K  56.5G    21K  /export
rpool/export/home        20K  56.5G    20K  /export/home
rpool/swap                2G  58.5G  5.34M  -
The sample zfs list output identifies the root pool components, such as the rpool/ROOT directory, which is not accessible by default.
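You can verify this behavior by checking the dataset's mountpoint property. The following check is a sketch using the dataset names from the example above; the legacy value indicates that rpool/ROOT is a container dataset that is not mounted as a browsable directory:

# zfs get mountpoint rpool/ROOT
NAME        PROPERTY    VALUE    SOURCE
rpool/ROOT  mountpoint  legacy   local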
If you initially created your ZFS storage pool with one disk, you can convert it to a mirrored ZFS configuration after the installation completes by using the zpool attach command to attach an available disk. For example:
# zpool attach rpool c1t1d0s0 c1t2d0s0
# zpool status
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h0m, 5.03% done, 0h13m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror      ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
            c1t2d0s0  ONLINE       0     0     0

errors: No known data errors
It will take some time to resilver the data to the new disk, but the pool is still available.
Until CR 6668666 is fixed, you will need to install the boot information on the additionally attached disks by using the installboot or installgrub commands if you want to enable booting on the other disks in the mirror. If you create a mirrored ZFS root pool with the initial installation method, then this step is unnecessary. For more information about installing boot information, see Booting From an Alternate Disk in a Mirrored ZFS Root Pool.
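For example, the following commands are a sketch that assumes the newly attached disk is c1t2d0s0, as in the previous example; use installboot on a SPARC based system and installgrub on an x86 based system:

sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t2d0s0

x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0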
For more information about adding or attaching disks, see Managing Devices in ZFS Storage Pools.
If you want to create another ZFS boot environment (BE) in the same storage pool, you can use the lucreate command. In the following example, a new BE named zfsnv1092BE is created. The current BE, named zfsnv109BE and displayed in the zfs list output, is not acknowledged in the lustatus output until the new BE is created.
# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
If you create a new ZFS BE in the same pool, use syntax similar to the following:
# lucreate -n zfsnv1092BE
Analyzing system configuration.
Comparing source boot environment <zfsnv109BE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfsnv1092BE>.
Source boot environment is <zfsnv109BE>.
Creating boot environment <zfsnv1092BE>.
Cloning file systems from boot environment <zfsnv109BE> to create boot environment <zfsnv1092BE>.
Creating snapshot for <rpool/ROOT/zfsnv109BE> on <rpool/ROOT/zfsnv109BE@zfsnv1092BE>.
Creating clone for <rpool/ROOT/zfsnv109BE@zfsnv1092BE> on <rpool/ROOT/zfsnv1092BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfsnv1092BE>.
Population of boot environment <zfsnv1092BE> successful.
Creation of boot environment <zfsnv1092BE> successful.
Creating a ZFS BE within the same pool uses the ZFS clone and snapshot features, so the BE is created instantly. For more details about using Solaris Live Upgrade for a ZFS root migration, see Migrating a UFS Root File System to a ZFS Root File System (Solaris Live Upgrade).
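Because the new BE is a clone, the snapshot that lucreate reported above remains in the pool. You can list it with the zfs list -t snapshot command; the output below is a sketch based on the dataset names in this example, and the space values are illustrative:

# zfs list -t snapshot
NAME                                USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/zfsnv109BE@zfsnv1092BE    98K      -  7.42G  -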
Next, verify the new boot environments. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsnv109BE                 yes      yes    yes       no     -
zfsnv1092BE                yes      no     no        yes    -

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   10.4G  56.5G    64K  /rpool
rpool/ROOT              7.42G  56.5G    18K  legacy
rpool/ROOT/zfsnv1092BE    97K  56.5G  7.42G  /tmp/.alt.luupdall.3244
rpool/ROOT/zfsnv109BE   7.42G  56.5G  7.42G  /
rpool/dump              1.00G  56.5G  1.00G  -
rpool/export              41K  56.5G    21K  /export
rpool/export/home         20K  56.5G    20K  /export/home
rpool/swap                2G   58.5G  5.34M  -
If you want to boot from an alternate BE, use the luactivate command. After you activate the BE on a SPARC based system, use the boot -L command to identify the available BEs when the boot device contains a ZFS storage pool. On an x86 based system, identify the BE to be booted from the GRUB menu.
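For example, to activate the new BE created above, use the following sketch. Note that Solaris Live Upgrade requires init or shutdown, rather than the reboot command, to complete the activation:

# luactivate zfsnv1092BE
# init 6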
For example, on a SPARC based system, use the boot -L command to display a list of available BEs. To boot from the new BE, zfsnv1092BE, select option 2. Then, type the displayed boot -Z command.
ok boot -L
Rebooting with command: boot -L
Boot device: /pci@1f,0/pci@1/scsi@4,1/disk@2,0:a  File and args: -L
1 zfsnv109BE
2 zfsnv1092BE
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfsnv1092BE

Program terminated
ok boot -Z rpool/ROOT/zfsnv1092BE
For more information about booting a ZFS file system, see Booting From a ZFS Root File System.