Use this procedure to restore a root (/) file system that resided on a metadevice when the backups were taken, for example after a failed root disk has been replaced with a new disk. Do not boot the node being restored. Make sure the rest of the cluster is running without problems before you begin the restore.
Because you must partition the new disk with the same scheme as the failed disk, identify that partitioning scheme before you begin this procedure, and recreate the file systems as appropriate.
Become superuser on a cluster node with access to the metaset, other than the node you want to restore.
Remove the hostname of the node being restored from all metasets.
# metaset -s setname -f -d -h nodelist

-s setname    Specifies the metaset name.
-f            Force.
-d            Deletes from the metaset.
-h nodelist   Specifies the name of the node to delete from the metaset.
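Because this command forcibly removes a node from a metaset, it can help to assemble and review the exact command line before running it on the surviving node. A minimal POSIX shell sketch; the set name and node name passed in below are examples, not fixed values:

```shell
# Build the metaset removal command and echo it for review instead of
# executing it directly. All arguments are caller-supplied examples.
build_metaset_remove() {
    setname=$1    # -s: the metaset name
    node=$2       # -h: the host to delete; -f forces, -d deletes
    echo "metaset -s $setname -f -d -h $node"
}

build_metaset_remove schost-1 phys-schost-1
```

Once the echoed line looks right, run it as superuser on a cluster node that still has access to the metaset.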
Replace the failed disk on the node on which the root (/) file system will be restored.
Refer to disk replacement procedures in the documentation that came with your server.
Boot the node being restored.
If using the Solaris CD-ROM, run the following command:
ok boot cdrom -s
If using a JumpStart server, run the following command:
ok boot net -s
Create all the partitions and swap on the root disk using the format(1M) command.
Recreate the original partitioning scheme that was on the failed disk.
Create the root (/) file system and other file systems as appropriate, using the newfs(1M) command.
Recreate the original file systems that were on the failed disk.
Be sure to create the /global/.devices/node@nodeid file system.
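If you saved the failed disk's partition table earlier with prtvtoc(1M), you can replay it onto the replacement disk with fmthard(1M) rather than recreating each slice by hand in format(1M). A dry-run sketch under that assumption; the vtoc.save file name and the c0t0d0 device are examples only:

```shell
# Echo the fmthard command that would replay a saved VTOC onto the new
# disk; nothing is written. vtoc.save is assumed to have been captured
# beforehand with: prtvtoc /dev/rdsk/c0t0d0s2 > vtoc.save
replay_vtoc() {
    vtoc=$1
    disk=$2    # slice 2 conventionally covers the whole disk
    echo "fmthard -s $vtoc $disk"
}

replay_vtoc vtoc.save /dev/rdsk/c0t0d0s2
```

Verify the echoed command against your saved VTOC before running it for real; if no saved VTOC exists, recreate the slices manually in format(1M) as described above.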
Mount the root (/) file system on a temporary mount point.
# mount device temp-mountpoint
Use the following commands to restore the root (/) file system.
# cd temp-mountpoint
# ufsrestore rvf dump-device
# rm restoresymtable
Install a new boot block on the new disk.
# /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
Remove the lines in the /temp-mountpoint/etc/system file for MDD root information.
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_mirror
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: drv/pcipsy
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
Edit the /temp-mountpoint/etc/vfstab file to change the root entry from a metadevice to a corresponding normal slice for each file system on the root disk that is part of the metadevice.
Example: Change this line:

/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -

To read:

/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
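Both edits can be made with sed(1) instead of an interactive editor. The sketch below demonstrates them on scratch copies of the two files; the metadevice name (d10), slice name (c0t0d0s0), and scratch directory are examples, and on the real node ROOT would be the temporary mount point:

```shell
# Demonstrate the /etc/system and /etc/vfstab edits on scratch copies.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc"

# Sample etc/system containing an abbreviated MDD root info block.
printf '%s\n' \
    'set maxusers=64' \
    '* Begin MDD root info (do not edit)' \
    'forceload: misc/md_mirror' \
    'rootdev:/pseudo/md@0:0,10,blk' \
    '* End MDD root info (do not edit)' > "$ROOT/etc/system"

# Sample etc/vfstab root entry on a metadevice.
printf '%s\n' \
    '/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -' > "$ROOT/etc/vfstab"

# 1. Delete the entire MDD root info block from etc/system.
sed '/^\* Begin MDD root info/,/^\* End MDD root info/d' \
    "$ROOT/etc/system" > "$ROOT/etc/system.new"

# 2. Point the root entry in etc/vfstab at the plain slice.
sed -e 's|/dev/md/dsk/d10|/dev/dsk/c0t0d0s0|' \
    -e 's|/dev/md/rdsk/d10|/dev/rdsk/c0t0d0s0|' \
    "$ROOT/etc/vfstab" > "$ROOT/etc/vfstab.new"
```

Review the `.new` files, then move them over the originals under the temporary mount point.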
Unmount the temporary file system, and check the raw disk device.
# cd /
# umount temp-mountpoint
# fsck raw-disk-device
Reboot the node in single-user mode.
# reboot -- "-s"
Replace the disk ID using the scdidadm command.
# scdidadm -R rootdisk
Use the metadb(1M) command to recreate the state database replicas.
# metadb -c copies -af raw-disk-device

-c copies              Specifies the number of replicas to create.
-af raw-disk-device    Creates initial state database replicas on the named raw disk device.
Reboot the node in cluster mode.
From a cluster node other than the restored node, use the metaset(1M) command to add the restored node to all metasets.
phys-schost-2# metaset -s setname -a -h nodelist

-a    Adds (creates) the metaset.
Set up the metadevice/mirror for root (/) according to the Solstice DiskSuite documentation.
The node is rebooted into cluster mode. The cluster is ready to use.
The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the metaset schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.
[Become superuser on a cluster node with access to the metaset,
other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]
ok boot cdrom -s
[Use format and newfs to recreate partitions and file systems.]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Remove the lines in the /temp-mountpoint/etc/system file for MDD root information:]
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_mirror
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: drv/pcipsy
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
[Edit the /temp-mountpoint/etc/vfstab file:]
Example: Change this line:
/dev/md/dsk/d10 /dev/md/rdsk/d10 / ufs 1 no -
To read:
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0 / ufs 1 no -
[Unmount the temporary file system and check the raw disk device:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Replace the disk ID:]
# scdidadm -R /dev/dsk/c0t0d0
[Recreate state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
# reboot
Type CTRL-d to boot into multiuser mode.
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1