Oracle Solaris ZFS Administration Guide: Oracle Solaris 11 Express 11/10
5. Managing ZFS Root Pool Components
The following sections provide information about installing and updating a ZFS root pool and configuring a mirrored root pool.
The Oracle Solaris 11 Express Live CD installation method installs a default ZFS root pool on a single disk. With the Oracle Solaris 11 Express automated installation (AI) method, you can create an AI manifest with the <ai_target_device> tag to identify the disk that is used to install the ZFS root pool. If you do not identify a target disk for the root pool, the default target disk is selected as follows:
The installer searches for a disk based on a recommended size of approximately 13 GB.
The disks are searched based on an order determined by the libdiskmgt library.
The installer selects the first disk that matches the recommended size.
If no disk matches the recommended size, the automated installation fails.
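Before you run an automated installation, you might want to confirm that at least one available disk meets the recommended size. The following commands are a minimal sketch, not part of the installation procedure itself. Piping echo to the format utility causes it to print the list of available disks and exit without selecting one, and iostat -En reports a Size field for each device. The output differs from system to system.

# echo | format
# iostat -En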
The AI installer provides the flexibility of installing a ZFS root pool on the default boot disk or on a target disk that you identify. You can specify the logical device, such as c1t0d0s0, or the physical device path. In addition, you can use the MPxIO identifier or the device ID for the device to be installed.
Also keep in mind that the disk intended for the root pool must have an SMI label. Otherwise, the installation will fail.
As with the Oracle Solaris 11 Express Live CD installation method, the automated installer can install a root pool on only a single disk. You can configure a mirrored root pool after the installation, as described later in this chapter.
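If you are not sure how a target disk is identified or labeled, you can inspect it before the installation. The following commands are a minimal sketch that assumes a disk named c1t0d0; substitute your own device name. The ls command shows the physical device path behind the logical device name, and prtvtoc displays the slice table of a disk that has an SMI (VTOC) label. If the disk has an EFI label, you can relabel it by running format -e, selecting the disk, and choosing the SMI label type with the label subcommand.

# ls -l /dev/dsk/c1t0d0s0
# prtvtoc /dev/rdsk/c1t0d0s2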
After the installation, review your ZFS storage pool and file system information. For example:
# zpool status
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t3d0s0  ONLINE       0     0     0

errors: No known data errors

# zfs list
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool                    22.9G   111G  76.5K  /rpool
rpool/ROOT               6.80G   111G    31K  legacy
rpool/ROOT/solaris       6.80G   111G  5.20G  /
rpool/dump               7.94G   111G  7.94G  -
rpool/export              614K   111G    32K  /export
rpool/export/home         582K   111G    32K  /export/home
rpool/export/home/admin   550K   111G   550K  /export/home/admin
rpool/swap               8.19G   119G  14.2M  -
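You can also confirm which root file system the pool boots by default by checking the bootfs pool property. The following example is a sketch; the value shown assumes the default solaris BE from the listing above, and the output on your system might differ.

# zpool get bootfs rpool
NAME   PROPERTY  VALUE               SOURCE
rpool  bootfs    rpool/ROOT/solaris  local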
Review your ZFS BE information. For example:
# beadm list
BE      Active Mountpoint Space Policy Created
--      ------ ---------- ----- ------ -------
solaris NR     /          8.41G static 2011-01-13 15:31
In the above output, NR means now running.
The default ZFS boot environment (BE) is named solaris. You can identify your BEs by using the beadm list command, as shown in the previous example.
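Before you update the BE, you might want to create a backup BE that you can fall back to. The following is a minimal sketch; the BE name solaris-backup is an example only. The beadm create command clones the active BE without activating the clone, and beadm list lets you verify that the new BE exists.

# beadm create solaris-backup
# beadm list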
You can use the pkg update command to update your ZFS boot environment. If you update your ZFS BE by using the pkg update command, a new BE is created and activated automatically, unless the updates to the existing BE are very minimal.
# pkg update

DOWNLOAD                                  PKGS       FILES    XFER (MB)
Completed                              707/707 10529/10529  194.9/194.9
.
.
.
A new BE, solaris-1, is created automatically and activated. Reboot the system to complete the BE activation, and then verify the BE status. For example:
# init 6
.
.
.
# beadm list
BE        Active Mountpoint Space  Policy Created
--        ------ ---------- -----  ------ -------
solaris   -      -          19.18M static 2011-01-13 15:31
solaris-1 NR     /          8.43G  static 2011-01-13 15:44
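If you need to return to the previous BE, you can activate it and reboot. The following is a sketch of that fallback, using the BE names shown in the previous listing; substitute your own BE name.

# beadm activate solaris
# init 6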
You cannot configure a mirrored root pool with any of the Oracle Solaris 11 Express installation methods, but you can easily configure one after the installation.
For information about replacing a disk in the root pool, see How to Replace a Disk in the ZFS Root Pool.
First, display your current root pool status. For example:

# zpool status rpool
  pool: rpool
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          c1t3d0s0  ONLINE       0     0     0

errors: No known data errors
Then, attach a second disk to configure a mirrored root pool. For example:

# zpool attach rpool c1t3d0s0 c1t2d0s0
Make sure to wait until resilver is done before rebooting.
View the root pool status to confirm that resilvering is in progress. For example:

# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Thu Jan 13 15:54:54 2011
    2.16G scanned out of 14.8G at 71.3M/s, 0h3m to go
    2.16G resilvered, 14.59% done
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t3d0s0  ONLINE       0     0     0
            c1t2d0s0  ONLINE       0     0     0  (resilvering)

errors: No known data errors
In the above output, the resilvering process is not complete. Resilvering is complete when you see messages similar to the following:
scrub: resilver completed after 0h10m with 0 errors on Thu Mar 11 11:27:22 2010
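Depending on the release, boot blocks might not be applied automatically to the newly attached disk. If they are not, apply them manually so that the system can boot from either disk in the mirror. The following commands are examples that assume the second disk is c1t2d0s0, as in the example above.

On a SPARC based system:

# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t2d0s0

On an x86 based system:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t2d0s0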