In this Solaris release, you can perform an initial installation by using the Solaris interactive text installer to create a ZFS storage pool that contains a bootable ZFS root file system. If you have an existing ZFS storage pool that you want to use for your ZFS root file system, then you must use Oracle Solaris Live Upgrade to migrate your existing UFS root file system to a ZFS root file system in an existing ZFS storage pool. For more information, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).
If you will be configuring zones after the initial installation of a ZFS root file system and you plan to patch or upgrade the system, see Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (Solaris 10 10/08) or Using Oracle Solaris Live Upgrade to Migrate or Upgrade a System With Zones (at Least Solaris 10 5/09).
If you already have ZFS storage pools on the system, they are acknowledged by the following message. However, these pools remain untouched unless you select the disks in the existing pools to create the new storage pool.
There are existing ZFS pools available on this system.  However, they can
only be upgraded using the Live Upgrade tools.  The following screens will
only allow you to install a ZFS root system, not upgrade one.
Existing pools will be destroyed if any of their disks are selected for the new pool.
Before you begin the initial installation to create a ZFS storage pool, see Oracle Solaris Installation and Oracle Solaris Live Upgrade Requirements for ZFS Support.
The Solaris interactive text installation process is basically the same as in previous Solaris releases, except that you are prompted to create a UFS or a ZFS root file system. UFS is still the default file system in this release. If you select a ZFS root file system, you are prompted to create a ZFS storage pool. The steps for installing a ZFS root file system follow:
Select the Solaris interactive installation method because a Solaris Flash installation is not available to create a bootable ZFS root file system. However, you can create a ZFS flash archive to be used during a JumpStart installation. For more information, see Installing a ZFS Root File System (Oracle Solaris Flash Archive Installation).
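For reference, a JumpStart profile that installs a ZFS flash archive looks similar to the following sketch. The archive location, disk names, and pool name shown here are placeholders, not values from your system:

install_type flash_install
archive_location nfs server:/export/flar/s10zfs.flar
partitioning explicit
pool rpool auto auto auto mirror c0t0d0s0 c0t1d0s0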
Starting in the Solaris 10 10/08 release, you can migrate a UFS root file system to a ZFS root file system as long as at least the Solaris 10 10/08 release is already installed. For more information about migrating to a ZFS root file system, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).
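The migration itself is a single Oracle Solaris Live Upgrade command. The following sketch assumes an existing ZFS pool named rpool and a new boot environment name of zfsBE:

# lucreate -n zfsBE -p rpool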
To create a ZFS root file system, select the ZFS option. For example:
Choose Filesystem Type

  Select the filesystem to use for your Solaris installation

            [ ] UFS
            [X] ZFS
After you select the software to be installed, you are prompted to select the disks on which to create your ZFS storage pool. This screen is similar to the corresponding screen in previous Solaris releases.
Select Disks

On this screen you must select the disks for installing Solaris software.
Start by looking at the Suggested Minimum field; this value is the
approximate space needed to install the software you've selected. For ZFS,
multiple disks will be configured as mirrors, so the disk you choose, or the
slice within the disk must exceed the Suggested Minimum value.

NOTE: ** denotes current boot disk

Disk Device                                        Available Space
=============================================================================
[X]  c1t0d0                                        69994 MB  (F4 to edit)
[ ]  c1t1d0                                        69994 MB
[-]  c1t2d0                                            0 MB
[-]  c1t3d0                                            0 MB

                              Maximum Root Size:  69994 MB
                              Suggested Minimum:   8279 MB
You can select the disk or disks to be used for your ZFS root pool. If you select two disks, a mirrored two-disk configuration is set up for your root pool. Either a two-disk or a three-disk mirrored pool is optimal. If you have eight disks and you select all of them, those eight disks are used for the root pool as one big mirror. This configuration is not optimal. Another option is to create a mirrored root pool after the initial installation is complete. A RAID-Z pool configuration for the root pool is not supported. For more information about configuring ZFS storage pools, see Replication Features of a ZFS Storage Pool.
To select two disks to create a mirrored root pool, use the cursor control keys to select the second disk. In the following example, both c1t0d0 and c1t1d0 are selected as the root pool disks. Both disks must have an SMI label and a slice 0. If the disks are not labeled with an SMI label or they don't contain slices, then you must exit the installation program, use the format utility to relabel and repartition the disks, and then restart the installation program.
Select Disks

On this screen you must select the disks for installing Solaris software.
Start by looking at the Suggested Minimum field; this value is the
approximate space needed to install the software you've selected. For ZFS,
multiple disks will be configured as mirrors, so the disk you choose, or the
slice within the disk must exceed the Suggested Minimum value.

NOTE: ** denotes current boot disk

Disk Device                                        Available Space
=============================================================================
[X]  c1t0d0                                        69994 MB
[X]  c1t1d0                                        69994 MB  (F4 to edit)
[-]  c1t2d0                                            0 MB
[-]  c1t3d0                                            0 MB

                              Maximum Root Size:  69994 MB
                              Suggested Minimum:   8279 MB
If the Available Space column shows 0 MB, the disk most likely has an EFI label. To use a disk with an EFI label, you must exit the installation program, relabel the disk with an SMI label by using the format -e command, and then restart the installation program.
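The relabeling step is interactive. The following sketch shows the general flow, assuming the disk to relabel is c1t2d0; the exact prompts can vary by release:

# format -e c1t2d0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
Ready to label disk, continue? y
format> quit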
If you do not create a mirrored root pool during installation, you can easily create one after the installation. For information, see How to Create a Mirrored Root Pool (Post Installation).
After you have selected a disk or disks for your ZFS storage pool, a screen similar to the following is displayed:
Configure ZFS Settings

Specify the name of the pool to be created from the disk(s) you have chosen.
Also specify the name of the dataset to be created within the pool that is
to be used as the root directory for the filesystem.

              ZFS Pool Name: rpool
      ZFS Root Dataset Name: s10s_u9wos_08
      ZFS Pool Size (in MB): 69995
  Size of Swap Area (in MB): 2048
  Size of Dump Area (in MB): 1536
        (Pool size must be between 6231 MB and 69995 MB)

                     [X] Keep / and /var combined
                     [ ] Put /var on a separate dataset
From this screen, you can change the name of the ZFS pool, the dataset name, the pool size, and the swap and dump device sizes by using the cursor control keys to move through the entries and replacing the default values with new values. Or, you can accept the default values. In addition, you can modify how the /var file system is created and mounted.
In this example, the root dataset name is changed to zfsBE.
              ZFS Pool Name: rpool
      ZFS Root Dataset Name: zfsBE
      ZFS Pool Size (in MB): 69995
  Size of Swap Area (in MB): 2048
  Size of Dump Area (in MB): 1536
        (Pool size must be between 6231 MB and 69995 MB)

                     [X] Keep / and /var combined
                     [ ] Put /var on a separate dataset
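If you accept the default swap and dump sizes during installation, you can still resize the underlying volumes afterward. The following is a sketch that assumes the default rpool/swap and rpool/dump volumes and that the swap device can be temporarily removed from use; the 1G and 4G sizes are example values only:

# zfs set volsize=1G rpool/dump
# swap -d /dev/zvol/dsk/rpool/swap
# zfs set volsize=4G rpool/swap
# swap -a /dev/zvol/dsk/rpool/swap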
You can change the installation profile at this final installation screen. For example:
Profile

The information shown below is your profile for installing Solaris software.
It reflects the choices you've made on previous screens.

============================================================================

        Installation Option: Initial
                Boot Device: c1t0d0
      Root File System Type: ZFS
            Client Services: None

                    Regions: North America
              System Locale: C ( C )

                   Software: Solaris 10, Entire Distribution
                  Pool Name: rpool
      Boot Environment Name: zfsBE
                  Pool Size: 69995 MB
            Devices in Pool: c1t0d0
                             c1t1d0
After the installation is completed, review the resulting ZFS storage pool and file system information. For example:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0

errors: No known data errors
# zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
rpool              8.03G  58.9G    96K  /rpool
rpool/ROOT         4.47G  58.9G    21K  legacy
rpool/ROOT/zfsBE   4.47G  58.9G  4.47G  /
rpool/dump         1.50G  58.9G  1.50G  -
rpool/export         44K  58.9G    23K  /export
rpool/export/home    21K  58.9G    21K  /export/home
rpool/swap         2.06G  61.0G    16K  -
The sample zfs list output identifies the root pool components, such as the rpool/ROOT directory, which is not accessible by default.
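You can confirm this behavior by checking the dataset's mountpoint property; rpool/ROOT uses a legacy mountpoint, so it is not mounted automatically. A quick check similar to the following shows the property (the SOURCE column value can vary):

# zfs get mountpoint rpool/ROOT
NAME        PROPERTY    VALUE   SOURCE
rpool/ROOT  mountpoint  legacy  local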
To create another ZFS boot environment (BE) in the same storage pool, you can use the lucreate command. In the following example, a new BE named zfs2BE is created. The current BE is named zfsBE, as shown in the zfs list output. However, the current BE is not acknowledged in the lustatus output until the new BE is created.
# lustatus
ERROR: No boot environments are configured on this system
ERROR: cannot determine list of all boot environment names
If you create a new ZFS BE in the same pool, use syntax similar to the following:
# lucreate -n zfs2BE
INFORMATION: The current boot environment is not named - assigning name <zfsBE>.
Current boot environment is named <zfsBE>.
Creating initial configuration for primary boot environment <zfsBE>.
The device </dev/dsk/c1t0d0s0> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <zfsBE> PBE Boot Device </dev/dsk/c1t0d0s0>.
Comparing source boot environment <zfsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
Creating configuration for boot environment <zfs2BE>.
Source boot environment is <zfsBE>.
Creating boot environment <zfs2BE>.
Cloning file systems from boot environment <zfsBE> to create boot environment <zfs2BE>.
Creating snapshot for <rpool/ROOT/zfsBE> on <rpool/ROOT/zfsBE@zfs2BE>.
Creating clone for <rpool/ROOT/zfsBE@zfs2BE> on <rpool/ROOT/zfs2BE>.
Setting canmount=noauto for </> in zone <global> on <rpool/ROOT/zfs2BE>.
Population of boot environment <zfs2BE> successful.
Creation of boot environment <zfs2BE> successful.
Creating a ZFS BE within the same pool uses ZFS clone and snapshot features to instantly create the BE. For more details about using Oracle Solaris Live Upgrade for a ZFS root migration, see Migrating a UFS Root File System to a ZFS Root File System (Oracle Solaris Live Upgrade).
Next, verify the new boot environments. For example:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
zfsBE                      yes      yes    yes       no     -
zfs2BE                     yes      no     no        yes    -
# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    8.03G  58.9G    97K  /rpool
rpool/ROOT               4.47G  58.9G    21K  legacy
rpool/ROOT/zfs2BE         116K  58.9G  4.47G  /
rpool/ROOT/zfsBE         4.47G  58.9G  4.47G  /
rpool/ROOT/zfsBE@zfs2BE  75.5K      -  4.47G  -
rpool/dump               1.50G  58.9G  1.50G  -
rpool/export               44K  58.9G    23K  /export
rpool/export/home          21K  58.9G    21K  /export/home
rpool/swap               2.06G  61.0G    16K  -
To boot from an alternate BE, use the luactivate command. After you activate the BE on a SPARC based system, use the boot -L command to identify the available BEs when the boot device contains a ZFS storage pool. On an x86 based system, select the BE to be booted from the GRUB menu.
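For example, to activate the new BE and reboot into it, you would use commands similar to the following minimal sketch; luactivate prints detailed fallback instructions that are omitted here:

# luactivate zfs2BE
# init 6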
For example, on a SPARC based system, use the boot -L command to display a list of available BEs. To boot from the new BE, zfs2BE, select option 2. Then, type the displayed boot -Z command.
ok boot -L
Executing last command: boot -L
Boot device: /pci@7c0/pci@0/pci@1/pci@0,2/LSILogic,sas@2/disk@0  File and args: -L
1 zfsBE
2 zfs2BE
Select environment to boot: [ 1 - 2 ]: 2

To boot the selected entry, invoke:
boot [<root-device>] -Z rpool/ROOT/zfs2BE

ok boot -Z rpool/ROOT/zfs2BE
For more information about booting a ZFS file system, see Booting From a ZFS Root File System.
If you did not create a mirrored ZFS root pool during installation, you can easily create one after the installation.
For information about replacing a disk in the root pool, see How to Replace a Disk in the ZFS Root Pool.
Display your current root pool status.
# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t0d0s0  ONLINE       0     0     0

errors: No known data errors
Attach a second disk to configure a mirrored root pool.
# zpool attach rpool c1t0d0s0 c1t1d0s0
Please be sure to invoke installboot(1M) to make 'c1t1d0s0' bootable.
Make sure to wait until resilver is done before rebooting.
View the root pool status to confirm that resilvering is complete.
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress for 0h1m, 24.26% done, 0h3m to go
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0  3.18G resilvered

errors: No known data errors
In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:
scrub: resilver completed after 0h10m with 0 errors on Thu Mar 11 11:27:22 2010
Apply boot blocks to the second disk after resilvering is complete.
sparc# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Verify that you can boot successfully from the second disk.
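For example, on a SPARC based system, boot from the second disk at the ok prompt. The disk1 alias is an assumption; use the device alias or full device path that maps to the newly attached disk on your system:

ok boot disk1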
Set up the system to boot automatically from the new disk, either by using the eeprom command or the setenv command from the SPARC boot PROM, or by reconfiguring the PC BIOS.
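For example, the following sketch sets the new disk as the primary boot device with the original disk as a fallback. The disk1 and disk0 aliases are assumptions and must match your system's device aliases:

sparc# eeprom boot-device="disk1 disk0"

Or, from the SPARC boot PROM:

ok setenv boot-device disk1 disk0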