The following sections describe how to perform these tasks:
You might need to replace a disk in the root pool for the following reasons:
The root pool disk is too small and you want to replace it with a larger disk.
The root pool disk is failing. In a non-redundant pool, if the disk is failing such that the system won't boot, you'll need to boot from alternate media, such as a CD or the network, before you replace the root pool disk.
In a mirrored root pool configuration, you might be able to attempt a disk replacement without having to boot from alternate media. You can replace a failed disk by using the zpool replace command, or, if you have an additional disk, you can use the zpool attach command. See the steps below for an example of attaching an additional disk and detaching a root pool disk.
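For example, a minimal sketch of the zpool replace approach, assuming a hypothetical mirrored root pool in which the failed disk c1t0d0s0 is replaced by a new disk c2t0d0s0 (both device names are illustrative only):

# zpool replace rpool c1t0d0s0 c2t0d0s0
# zpool status rpool
<Wait for the resilver to complete before relying on the new disk>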
Some hardware requires that you offline and unconfigure a disk before attempting the zpool replace operation to replace a failed disk. For example:
# zpool offline rpool c1t0d0s0
# cfgadm -c unconfigure c1::dsk/c1t0d0
<Physically remove failed disk c1t0d0>
<Physically insert replacement disk c1t0d0>
# cfgadm -c configure c1::dsk/c1t0d0
<Confirm that the new disk has an SMI label and a slice 0>
# zpool replace rpool c1t0d0s0
# zpool online rpool c1t0d0s0
# zpool status rpool
<Let disk resilver before installing the boot blocks>
SPARC# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
x86# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
On some hardware, you do not have to online or reconfigure the replacement disk after it is inserted.
Identify the boot device pathnames of the current and new disk so that you can test booting from the replacement disk, and so that you can manually boot from the existing disk if the replacement disk fails. In the example below, the current root pool disk (c1t10d0s0) is:
In the example below, the replacement boot disk (c1t9d0s0) is:
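If you need to determine a disk's boot device pathname yourself, one approach is to read the physical device path from its /dev/dsk symbolic link; a sketch, with the output shown only to illustrate the path format on the SPARC system in this example:

# ls -l /dev/dsk/c1t9d0s0
lrwxrwxrwx ... /dev/dsk/c1t9d0s0 -> ../../devices/pci@8,700000/pci@3/scsi@5/sd@9,0:a

The portion after /devices, with the trailing :a slice suffix removed, is the pathname to use at the SPARC ok prompt.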
For information about relabeling a disk that is intended for the root pool, see the following site:
# zpool attach rpool c1t10d0s0 c1t9d0s0
Make sure to wait until resilver is done before rebooting.
# zpool status rpool
  pool: rpool
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Fri Jan 14 13:35:45 2011
    814M scanned out of 15.3G at 16.3M/s, 0h15m to go
    813M resilvered, 5.18% done
config:

        NAME           STATE     READ WRITE CKSUM
        rpool          ONLINE       0     0     0
          mirror-0     ONLINE       0     0     0
            c1t10d0s0  ONLINE       0     0     0
            c1t9d0s0   ONLINE       0     0     0  (resilvering)

errors: No known data errors
For example, on a SPARC based system:
ok boot /pci@8,700000/pci@3/scsi@5/sd@9,0
# zpool detach rpool c1t10d0s0
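After you verify that the system boots from the replacement disk and detach the old disk, you might also want to make the replacement disk the default boot device. A sketch for a SPARC based system, assuming the replacement disk's device path from this example:

# eeprom boot-device=/pci@8,700000/pci@3/scsi@5/sd@9,0

On systems that use OpenBoot device aliases, you could instead point boot-device at the appropriate alias.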
Create root pool snapshots for recovery purposes. The best way to create root pool snapshots is to do a recursive snapshot of the root pool.
The procedure below creates a recursive root pool snapshot and stores the snapshot as a file in a pool on a remote system. In the case of a root pool failure, the remote dataset can be mounted by using NFS and the snapshot file received into the recreated pool. You can also store root pool snapshots as the actual snapshots in a pool on a remote system. Sending and receiving the snapshots from a remote system is a bit more complicated because you must configure ssh or use rsh while the system to be repaired is booted from the Solaris OS miniroot.
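For example, a minimal sketch of the second method, sending the recursive snapshot created in the example below directly over ssh, assuming root ssh access to the remote system and a hypothetical receiving dataset rpool2/snaps there:

local# zfs send -Rv rpool@0311 | ssh remote-system zfs receive -Fdu rpool2/snaps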
For information about remotely storing and recovering root pool snapshots and the most up-to-date information about root pool recovery, go to this site:
Validating remotely stored snapshots, whether they are stored as files or as actual snapshots, is an important step in root pool recovery. With either method, snapshots should be recreated on a routine basis, such as when the pool configuration changes or when the Solaris OS is upgraded.
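One way to validate a stored snapshot file without disturbing an existing pool is a dry-run receive; a sketch, assuming a hypothetical scratch pool named testpool on the remote system:

remote# zfs receive -nv testpool/rpoolcheck < /rpool/snaps/rpool.0311

The -n option parses the stream and reports what would be received without actually creating any datasets.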
In the following example, the system is booted from the osolBE boot environment.
remote# zfs create rpool/snaps
remote# zfs set sharenfs='rw=local-system,root=local-system' rpool/snaps
remote# share
-@rpool/snaps   /rpool/snaps   sec=sys,rw=local-system,root=local-system   ""
In this example, the system has two BEs, opensolaris and osolBE. The active BE is osolBE.
local# zpool set listsnapshots=on rpool
local# zfs snapshot -r rpool@0311
local# zfs list -r rpool
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         20.1G   114G    67K  /rpool
rpool@0311                        0      -    67K  -
rpool/ROOT                    4.00G   114G    21K  legacy
rpool/ROOT@0311                   0      -    21K  -
rpool/ROOT/opensolaris        5.11M   114G  3.96G  /
rpool/ROOT/opensolaris@0311       0      -  3.96G  -
rpool/ROOT/osolBE             4.00G   114G  3.96G  /
rpool/ROOT/osolBE@install     30.9M      -  3.89G  -
rpool/ROOT/osolBE@osolBE      2.97M      -  3.96G  -
rpool/ROOT/osolBE@0311            0      -  3.96G  -
rpool/dump                    7.94G   114G  7.94G  -
rpool/dump@0311                   0      -  7.94G  -
rpool/export                  69.5K   114G    23K  /export
rpool/export@0311                 0      -    23K  -
rpool/export/home             46.5K   114G    23K  /export/home
rpool/export/home@0311            0      -    23K  -
rpool/export/home/admin       23.5K   114G  23.5K  /export/home/admin
rpool/export/home/admin@0311      0      -  23.5K  -
rpool/swap                    8.20G   122G  14.7M  -
rpool/swap@0311                   0      -  14.7M  -
local# zfs send -Rv rpool@0311 > /net/remote-system/rpool/snaps/rpool.0311
sending from @ to rpool@0311
sending from @ to rpool/dump@0311
sending from @ to rpool/ROOT@0311
sending from @ to rpool/ROOT/osolBE@install
sending from @install to rpool/ROOT/osolBE@osolBE
sending from @osolBE to rpool/ROOT/osolBE@0311
sending from @ to rpool/ROOT/opensolaris@0311
sending from @ to rpool/swap@0311
sending from @ to rpool/export@0311
sending from @ to rpool/export/home@0311
sending from @ to rpool/export/home/admin@0311
In this scenario, assume the following conditions:
ZFS root pool cannot be recovered
ZFS root pool snapshots are stored on a remote system and are shared over NFS
The system is booted from a Solaris release that is equivalent to the release on the damaged pool, so that the Solaris release and the pool version match. Otherwise, you will need to add the -o version=version-number property option and value when you recreate the root pool in step 4 below; a sketch of that variant follows the zpool create command.
All steps below are performed on the local system.
On a SPARC based system, select one of the following boot methods:
ok boot net -s
ok boot cdrom -s
If you don't use the -s option, you'll need to exit the installation program.
On an x86 based system, select the option for booting from the DVD or the network. Then, exit the installation program.
# mount -F nfs remote-system:/rpool/snaps /mnt
If your network services are not configured, you might need to specify the IP address of the remote system instead of its name.
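For example, a sketch that substitutes an illustrative IP address for the remote system's host name:

# mount -F nfs 192.168.1.10:/rpool/snaps /mnt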
For more information about relabeling the disk, go to the following site:
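If another disk with the same geometry already carries the desired SMI label, one common way to label the replacement disk is to copy the existing VTOC; a sketch, assuming a hypothetical reference disk c1t1d0 that holds the desired label and slice 0:

# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2

Otherwise, you can use format -e to write an SMI label and create a slice 0 interactively.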
# zpool create -f -o failmode=continue -R /a -m legacy -o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0
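If the booted media is newer than the damaged pool (see the conditions above), a sketch of the same command pinned to an older pool version, where 22 is illustrative only:

# zpool create -f -o version=22 -o failmode=continue -R /a -m legacy \
-o cachefile=/etc/zfs/zpool.cache rpool c1t0d0s0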
This step might take some time. For example:
# cat /mnt/rpool.0311 | zfs receive -Fdu rpool
Using the -u option means that the restored archive is not mounted when the zfs receive operation completes.
# zfs mount rpool/ROOT/osolBE
# zfs mount -a
Other BEs are not mounted because they have canmount=noauto set, which suppresses mounting when the zfs mount -a operation is performed.
# zfs list
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool                         20.1G   114G    67K  /rpool
rpool@0311                        0      -    67K  -
rpool/ROOT                    4.00G   114G    21K  legacy
rpool/ROOT@0311                   0      -    21K  -
rpool/ROOT/opensolaris        5.11M   114G  3.96G  /
rpool/ROOT/opensolaris@0311       0      -  3.96G  -
rpool/ROOT/osolBE             4.00G   114G  3.96G  /
rpool/ROOT/osolBE@install     30.9M      -  3.89G  -
rpool/ROOT/osolBE@osolBE      2.97M      -  3.96G  -
rpool/ROOT/osolBE@0311            0      -  3.96G  -
rpool/dump                    7.94G   114G  7.94G  -
rpool/dump@0311                   0      -  7.94G  -
rpool/export                  69.5K   114G    23K  /export
rpool/export@0311                 0      -    23K  -
rpool/export/home             46.5K   114G    23K  /export/home
rpool/export/home@0311            0      -    23K  -
rpool/export/home/admin       23.5K   114G  23.5K  /export/home/admin
rpool/export/home/admin@0311      0      -  23.5K  -
rpool/swap                    8.20G   122G  14.7M  -
rpool/swap@0311                   0      -  14.7M  -
# zpool set bootfs=rpool/ROOT/osolBE rpool
On a SPARC based system:
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t0d0s0
On an x86 based system:
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0
# init 6