Sun Cluster System Administration Guide for Solaris OS

How to Restore the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to restore the root (/) file system to a new disk, such as after replacing a failed root disk. The node being restored should not be booted. Make sure that the cluster is running without problems before you perform the restore procedure.


Note –

Because you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.
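
As a minimal sketch, one common way to capture and replay the partitioning is with the prtvtoc(1M) and fmthard(1M) commands, assuming a copy of the old disk's VTOC was saved beforehand or can still be read. The device name c0t0d0 and the save-file path are illustrative assumptions, not part of the documented procedure:

    [Save the VTOC of the root disk while it is still readable:]
    # prtvtoc /dev/rdsk/c0t0d0s2 > /var/tmp/root-disk.vtoc
    [Replay the saved VTOC onto the replacement disk:]
    # fmthard -s /var/tmp/root-disk.vtoc /dev/rdsk/c0t0d0s2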


Steps
  1. Become superuser or assume an equivalent role on a cluster node with access to the disk sets to which the node to be restored is also attached.

    Perform this step from a node other than the node you are restoring.

  2. Remove the hostname of the node being restored from all metasets.

    Run this command from a node in the metaset other than the node you are removing.


    # metaset -s setname -f -d -h nodelist
    
    -s setname

    Specifies the disk set name.

    -f

    Force.

    -d

    Deletes from the disk set.

    -h nodelist

    Specifies the name of the node to delete from the disk set.
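
    After the removal, you can display the disk set from the surviving node to confirm that the host is no longer listed. This check is illustrative, not a required step; schost-1 is an example set name:

    phys-schost-2# metaset -s schost-1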

  3. Restore the root (/) and /usr file systems.

    To restore the root (/) and /usr file systems, follow the procedure in Chapter 27, Restoring Files and File Systems (Tasks), in System Administration Guide: Devices and File Systems. Omit the step in the Solaris procedure that reboots the system.
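
    For orientation only, the following is a condensed sketch of the restore sequence that the referenced Solaris chapter describes, assuming a UFS root on slice c0t0d0s0 and a backup on tape device /dev/rmt/0 (both illustrative; SPARC boot-block syntax shown). Follow the referenced chapter for the authoritative steps:

    ok boot cdrom -s
    [Recreate and mount the new root file system:]
    # newfs /dev/rdsk/c0t0d0s0
    # mount /dev/dsk/c0t0d0s0 /a
    [Restore root (/) from the backup tape:]
    # cd /a
    # ufsrestore rvf /dev/rmt/0
    # rm restoresymtable
    # cd /
    # umount /a
    [Install a new boot block on the root slice:]
    # installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0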


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  4. Reboot the node in multiuser mode.


    # reboot
    
  5. Replace the disk ID using the scdidadm(1M) command.


    # scdidadm -R rootdisk
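
    To confirm the new mapping, you can list the local DID instances. This check is illustrative and is not part of the documented procedure:

    # scdidadm -l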
    
  6. Use the metadb(1M) command to recreate the state database replicas.


    # metadb -c copies -af raw-disk-device
    
    -c copies

    Specifies the number of replicas to create.

    -f raw-disk-device

    Specifies the raw disk device on which to create replicas.

    -a

    Adds replicas.
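
    To verify that the replicas were created, you can display their status. The -i option, shown here only as an illustrative check, prints the status flags with an explanation:

    # metadb -i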

  7. From a cluster node other than the restored node, use the metaset command to add the restored node to all disksets.


    phys-schost-2# metaset -s setname -a -h nodelist
    
    -a

    Creates and adds the host to the disk set.

    The node is rebooted into cluster mode. The cluster is ready to use.
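
    As an optional, illustrative check that the restored node has rejoined the cluster, you can display the cluster node status:

    # scstat -n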


Example 9–6 Restoring the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the disk set schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.


[Become superuser or assume an equivalent role on a cluster node other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]
Restore the root (/) and /usr file systems using the procedure in the Solaris system administration documentation.
[Reboot:]
# reboot
[Replace the disk ID:]
# scdidadm -R /dev/dsk/c0t0d0
[Recreate state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1