Oracle Solaris ZFS Administration Guide (Oracle Solaris 10 1/13 Information Library)
Recovering the ZFS Root Pool or Root Pool Snapshots

The following sections describe how to replace a disk in the ZFS root pool, how to create root pool snapshots, and how to re-create a ZFS root pool and restore root pool snapshots.
How to Replace a Disk in the ZFS Root Pool

You might need to replace a disk in the root pool for the following reasons:
The root pool is too small and you want to replace a smaller disk with a larger disk.
A root pool disk is failing. In a non-redundant pool, if the disk is failing such that the system won't boot, you must boot from alternate media, such as a DVD or the network, before you replace the root pool disk.
In a mirrored root pool configuration, you can attempt a disk replacement without booting from alternate media. You can replace a failed disk by using the zpool replace command. Or, if you have an additional disk, you can use the zpool attach command. See the procedure in this section for an example of attaching an additional disk and detaching a root pool disk.
Some hardware requires that you take a disk offline and unconfigure it before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
With some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
You must identify the boot device path names of the current disk and the new disk so that you can test booting from the replacement disk and also manually boot from the existing disk, if the replacement disk fails. In the example in the following procedure, the path name for the current root pool disk (c1t10d0s0) is:
/pci@8,700000/pci@3/scsi@5/sd@a,0
The path name for the replacement boot disk (c1t9d0s0) is:
/pci@8,700000/pci@3/scsi@5/sd@9,0
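On a live system, one way to determine such a path name is to read the /dev/dsk symbolic link for the device, which points into the /devices tree. The following is a sketch: the link target shown is the one from this example and varies by machine.

```shell
# /dev/dsk entries are symbolic links into the /devices tree, e.g.:
#   ls -l /dev/dsk/c1t9d0s0
# The boot path is the link target without the leading "../../devices"
# prefix and without the minor-node suffix (":a" corresponds to slice 0).
link='../../devices/pci@8,700000/pci@3/scsi@5/sd@9,0:a'   # example target
bootpath=$(printf '%s\n' "$link" | sed -e 's|.*/devices||' -e 's|:[a-z]*$||')
echo "$bootpath"    # /pci@8,700000/pci@3/scsi@5/sd@9,0
```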
For information about relabeling a disk that is intended for the root pool, see the following references:
Attach the new disk to the root pool. For example:
# zpool attach rpool c1t10d0s0 c1t9d0s0
Confirm the root pool status and wait for the resilvering to complete. For example:
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scrub: resilver in progress, 25.47% done, 0h4m to go
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            c1t10d0s0  ONLINE       0     0     0
            c1t9d0s0   ONLINE       0     0     0

errors: No known data errors
Determine the existing rpool pool size:
# zpool list rpool
NAME    SIZE  ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
rpool  29.8G   152K  29.7G   0%  1.00x  ONLINE  -
Enable the autoexpand pool property to expand the pool to the size of the larger replacement disk:

# zpool set autoexpand=on rpool
Review the expanded rpool pool size:
# zpool list rpool
NAME   SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  279G   146K  279G   0%  1.00x  ONLINE  -
For example, on a SPARC based system, you would use syntax similar to the following:
ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0
If the system boots from the replacement disk, detach the old disk. For example:
# zpool detach rpool c1t10d0s0
Set up the system to boot automatically from the new disk:
SPARC - Use the eeprom command or the setenv command from the SPARC boot PROM.
x86 - Reconfigure the system BIOS.
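For instance, on SPARC the eeprom command can record the replacement disk as the default boot device. This fragment is a sketch reusing the device path from this example; verify the path on your own system before setting it:

```shell
# Make the replacement disk the default boot device (run as root).
# The device path below is the example path used in this section.
eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0
```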
How to Create Root Pool Snapshots

You can create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to perform a recursive snapshot of the root pool.
The following procedure creates a recursive root pool snapshot and stores the snapshot as a file and as snapshots in a pool on a remote system. If a root pool fails, the remote dataset can be mounted by using NFS, and the snapshot file can be received into the re-created pool. You can also store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Oracle Solaris OS miniroot.
Validating remotely stored snapshots as files or snapshots is an important step in root pool recovery. With either method, snapshots should be re-created on a routine basis, such as when the pool configuration changes or when the Oracle Solaris OS is upgraded.
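Such routine re-creation can be scripted. The following sketch only prints the commands it would run (DRYRUN=echo); the pool name, date-based snapshot name, and NFS path are the examples used in this section, not requirements:

```shell
# Dry-run sketch: take a recursive, date-stamped root pool snapshot and
# stream it to an NFS-mounted snapshot area. Clear DRYRUN to run the zfs
# commands for real on a Solaris system.
POOL=rpool
SNAPDIR=/net/remote-system/rpool/snaps
STAMP=$(date +%Y%m%d)
SNAP="${POOL}@${STAMP}"
DRYRUN=echo
$DRYRUN zfs snapshot -r "$SNAP"
$DRYRUN sh -c "zfs send -Rv $SNAP > $SNAPDIR/$POOL.$STAMP"
```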
In the following procedure, the system is booted from the zfsBE boot environment.
Create space on a remote system in which to store the snapshots. For example:
remote# zfs create rpool/snaps
Share the snapshot space with the local system. For example:
remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
# share
-@rpool/snaps /rpool/snaps sec=sys,rw=local-system,root=local-system ""
Create a recursive snapshot of the root pool. For example:

local# zfs snapshot -r rpool@snap1
local# zfs list -r rpool
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     15.1G   119G   106K  /rpool
rpool@snap1                   0      -   106K  -
rpool/ROOT                5.00G   119G    31K  legacy
rpool/ROOT@snap1              0      -    31K  -
rpool/ROOT/zfsBE          5.00G   119G  5.00G  /
rpool/ROOT/zfsBE@snap1        0      -  5.00G  -
rpool/dump                2.00G   120G  1.00G  -
rpool/dump@snap1              0      -  1.00G  -
rpool/export                63K   119G    32K  /export
rpool/export@snap1            0      -    32K  -
rpool/export/home           31K   119G    31K  /export/home
rpool/export/home@snap1       0      -    31K  -
rpool/swap                8.13G   123G  4.00G  -
rpool/swap@snap1              0      -  4.00G  -
For example, to send the root pool snapshots to a remote pool as a file, use syntax similar to the following:
local# zfs send -Rv rpool@snap1 > /net/remote-system/rpool/snaps/rpool.snap1
sending from @ to rpool@snap1
sending from @ to rpool/ROOT@snap1
sending from @ to rpool/ROOT/zfsBE@snap1
sending from @ to rpool/dump@snap1
sending from @ to rpool/export@snap1
sending from @ to rpool/export/home@snap1
sending from @ to rpool/swap@snap1
To send the root pool snapshots to a remote pool as snapshots, use syntax similar to the following:
local# zfs send -Rv rpool@snap1 | ssh remote-system zfs receive -Fd -o canmount=off tank/snaps
sending from @ to rpool@snap1
sending from @ to rpool/export@snap1
sending from @ to rpool/export/home@snap1
sending from @ to rpool/ROOT@snap1
sending from @ to rpool/ROOT/zfsBE@snap1
sending from @ to rpool/dump@snap1
sending from @ to rpool/swap@snap1
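To validate a snapshot stored as a file, one simple check is to compare checksums of the local and remote copies with the POSIX cksum utility. The sketch below simulates the comparison on a temporary file so that it is runnable anywhere; on a real system you would run cksum on the actual stream file and, over ssh, on the remote copy:

```shell
# Compare cksum output (CRC and byte count) for two copies of a snapshot
# stream. On a real system, for example:
#   local#  cksum /net/remote-system/rpool/snaps/rpool.snap1
#   remote# cksum /rpool/snaps/rpool.snap1
f=/tmp/rpool.snap1.demo
printf 'stream-bytes' > "$f"
sum_a=$(cksum < "$f")
sum_b=$(cksum < "$f")    # stands in for the checksum of the remote copy
if [ "$sum_a" = "$sum_b" ]; then
    echo "stream copies match"
fi
```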
How to Re-create a ZFS Root Pool and Restore Root Pool Snapshots

In this procedure, assume the following conditions:
The ZFS root pool cannot be recovered.
The ZFS root pool snapshots are stored on a remote system and are shared over NFS.
All the steps are performed on the local system.
SPARC - Select one of the following boot methods:
ok boot net -s
ok boot cdrom -s
If you do not use the -s option, you must exit the installation program.
x86 – Select the option for booting from the DVD or the network. Then, exit the installation program.
Mount the remote snapshot file system. For example:
# mount -F nfs remote-system:/rpool/snaps /mnt
If your network services are not configured, you might need to specify the remote-system's IP address.
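For instance (the address below is purely hypothetical):

```shell
# Mount the remote snapshot area by IP address when name services are
# not yet configured in the miniroot (192.168.1.10 is a made-up address).
mount -F nfs 192.168.1.10:/rpool/snaps /mnt
```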
For more information about relabeling the disk, see these references:
Re-create the root pool. For example:
# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t1d0s0
Restore the root pool snapshots. This step might take some time. For example:
# cat /mnt/rpool.snap1 | zfs receive -Fdu rpool
Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.
To restore the actual root pool snapshots that are stored in a pool on a remote system, use syntax similar to the following:
# ssh remote-system zfs send -Rb tank/snaps/rpool@snap1 | zfs receive -F rpool
Verify that the root pool datasets are restored. For example:
# zfs list
Set the bootfs property on the root pool BE. For example:
# zpool set bootfs=rpool/ROOT/zfsBE rpool
Install the boot blocks on the new disk.
SPARC:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
x86:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Reboot the system:

# init 6
How to Roll Back Root Pool Snapshots From a Failsafe Boot

This procedure assumes that existing root pool snapshots are available. In the example, they are available on the local system.
# zfs snapshot -r rpool@snap1
# zfs list -r rpool
NAME                         USED  AVAIL  REFER  MOUNTPOINT
rpool                       7.84G  59.1G   109K  /rpool
rpool@snap1                   21K      -   106K  -
rpool/ROOT                  4.78G  59.1G    31K  legacy
rpool/ROOT@snap1                0      -    31K  -
rpool/ROOT/s10zfsBE         4.78G  59.1G  4.76G  /
rpool/ROOT/s10zfsBE@snap1   15.6M      -  4.75G  -
rpool/dump                  1.00G  59.1G  1.00G  -
rpool/dump@snap1              16K      -  1.00G  -
rpool/export                  99K  59.1G    32K  /export
rpool/export@snap1            18K      -    32K  -
rpool/export/home             49K  59.1G    31K  /export/home
rpool/export/home@snap1       18K      -    31K  -
rpool/swap                  2.06G  61.2G    16K  -
rpool/swap@snap1                0      -    16K  -
Shut down the system and boot failsafe mode:

ok boot -F failsafe
ROOT/s10zfsBE was found on rpool.
Do you wish to have it mounted read-write on /a? [y,n,?] y
mounting rpool on /a
Starting shell.
Roll back each root pool snapshot. For example:

# zfs rollback rpool@snap1
# zfs rollback rpool/ROOT@snap1
# zfs rollback rpool/ROOT/s10zfsBE@snap1
Reboot the system:

# init 6