Sun Cluster System Administration Guide for Solaris OS

Restoring Cluster Files

Table 9–2 Task Map: Restoring Cluster Files

Task: For Solstice DiskSuite/Solaris Volume Manager, restore files interactively following Solaris restore procedures
For Instructions, Go To: How to Restore Individual Files Interactively (Solstice DiskSuite/Solaris Volume Manager)

Task: For Solstice DiskSuite/Solaris Volume Manager, restore the root (/) file system
For Instructions, Go To: How to Restore the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)
                         How to Restore a root (/) File System That Was on a Metadevice (Solstice DiskSuite/Solaris Volume Manager)

Task: For VERITAS Volume Manager, restore a non-encapsulated root (/) file system
For Instructions, Go To: SPARC: How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)

Task: For VERITAS Volume Manager, restore an encapsulated root (/) file system
For Instructions, Go To: SPARC: How to Restore an Encapsulated root (/) File System (VERITAS Volume Manager)

How to Restore Individual Files Interactively (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to restore one or more individual files. Be sure the cluster is running problem-free before performing the restore procedure.

  1. Become superuser on the cluster node you want to restore.

  2. Stop all the data services that are using the files to be restored.


    # scswitch -z -g resource-group[,...] -h ""


    Specifying an empty string ("") for the node list switches the resource groups offline.

  3. Restore the files using the ufsrestore command.
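
    For example, a minimal sketch of an interactive restore, assuming the backup is on the tape device /dev/rmt/0 and the file to recover is /etc/hosts (both names are placeholders for your own devices and files):


    # cd /
    # ufsrestore ivf /dev/rmt/0
    ufsrestore > add etc/hosts
    ufsrestore > extract
    ufsrestore > quit


    The extract subcommand prompts for the volume number and whether to set the owner and mode of the current directory. After the files are restored, you can bring the resource groups that were taken offline in Step 2 back online (for example, scswitch -z -g resource-group[,...] -h node).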

How to Restore the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to restore the root (/) file system to a new disk, such as after replacing a bad root disk. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Become superuser on a cluster node with access to the metaset, other than the node you want to restore.

  2. Remove the hostname of the node being restored from all metasets.

    Run this command from a node in the metaset other than the node you are removing.


    # metaset -s setname -f -d -h nodelist
    

    -s setname

    Specifies the diskset name.

    -f

    Force.

    -d

    Deletes from the diskset.

    -h nodelist

    Specifies the name of the node to delete from the diskset.

  3. Replace the failed disk on the node on which the root (/) file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  4. Boot the node that you want to restore.

    • If you are using the Solaris CD:

      • SPARC: At the OpenBoot PROM ok prompt, type the following command:


        ok boot cdrom -s
        

      • x86: Insert the CD into the system's CD drive and boot the system by shutting it down and then turning it off and on. On the Current Boot Parameters screen, type the following command:


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
        

    • If you are using a Solaris JumpStart™ server:

      • SPARC: At the OpenBoot PROM ok prompt, type the following command:


        ok boot net -s
        

      • x86: Boot the system by shutting it down and then turning it off and on. On the Current Boot Parameters screen, type the following command:


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
        

  5. Create all the partitions and swap on the root disk using the format(1M) command.

    Recreate the original partitioning scheme that was on the failed disk.
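
    If a copy of the failed disk's volume table of contents (VTOC) was saved before the failure, for example with the prtvtoc command, you can reapply it to the new disk with fmthard instead of recreating each slice by hand in format. A minimal sketch, assuming a saved VTOC file /etc/vtoc.c0t0d0 and the replacement disk c0t0d0 (both names are placeholders):


    # fmthard -s /etc/vtoc.c0t0d0 /dev/rdsk/c0t0d0s2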

  6. Create the root (/) file system and other file systems as appropriate, using the newfs(1M) command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.
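
    For example, a minimal sketch assuming the root (/) file system goes on slice 0 of the replacement disk c0t0d0 and the /global/.devices/node@nodeid file system was on slice 3 (the disk and slice numbers are placeholders; use the original layout of the failed disk):


    # newfs /dev/rdsk/c0t0d0s0
    # newfs /dev/rdsk/c0t0d0s3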


  7. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  8. Use the following commands to restore the root (/) file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

    The file system is now restored.

  9. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  10. Reboot the node in single-user mode.


    # reboot -- "-s"
    

  11. Replace the disk ID using the scdidadm(1M) command.


    # scdidadm -R rootdisk
    

  12. Use the metadb(1M) command to recreate the state database replicas.


    # metadb -c copies -af raw-disk-device
    

    -c copies

    Specifies the number of replicas to create.

    -f raw-disk-device

    Raw disk device on which to create replicas.

    -a

    Adds replicas.

  13. Reboot the node in cluster mode.

    1. Start the reboot.


      # reboot
      

      During this boot you might see an error or warning message, ending with the following instruction:


      Type control-d to proceed with normal startup,
      (or give root password for system maintenance):

    2. Press Control-d to boot into multiuser mode.

  14. From a cluster node other than the restored node, use the metaset command to add the restored node to all metasets.


    phys-schost-2# metaset -s setname -a -h nodelist
    

    -a

    Creates and adds the host to the diskset.

    The node is rebooted into cluster mode. The cluster is ready to use.

Example—Restoring the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the diskset schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.


[Become superuser on a cluster node other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]

Boot the node from the Solaris CD:


[Use format and newfs to recreate partitions and file systems.]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Replace the disk ID:]
# scdidadm -R /dev/dsk/c0t0d0
[Recreate state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
# reboot
Press Control-d to boot into multiuser mode.
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

How to Restore a root (/) File System That Was on a Metadevice (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to restore a root (/) file system that was on a metadevice when the backups were performed. Perform this procedure under circumstances such as when a root disk is corrupted and replaced with a new disk. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Become superuser on a cluster node with access to the metaset, other than the node you want to restore.

  2. Remove the hostname of the node being restored from all metasets.


    # metaset -s setname -f -d -h nodelist
    

    -s setname

    Specifies the metaset name.

    -f

    Force.

    -d

    Deletes from the metaset.

    -h nodelist

    Specifies the name of the node to delete from the metaset.

  3. Replace the failed disk on the node on which the root (/) file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  4. Boot the node that you want to restore.

    • If you are using the Solaris CD:

      • SPARC: At the OpenBoot PROM ok prompt, type the following command:


        ok boot cdrom -s
        

      • x86: Insert the CD into the system's CD drive and boot the system by shutting it down and then turning it off and on. On the Current Boot Parameters screen, type the following command:


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
        

    • If you are using a Solaris JumpStart™ server:

      • SPARC: At the OpenBoot PROM ok prompt, type the following command:


        ok boot net -s
        

      • x86: Boot the system by shutting it down and then turning it off and on. On the Current Boot Parameters screen, type the following command:


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
        

  5. Create all the partitions and swap on the root disk using the format command.

    Recreate the original partitioning scheme that was on the failed disk.

  6. Create the root (/) file system and other file systems as appropriate, using the newfs command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  7. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  8. Use the following commands to restore the root (/) file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    

  9. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  10. Remove the lines for MDD root information from the /temp-mountpoint/etc/system file.


    * Begin MDD root info (do not edit)
    forceload: misc/md_trans
    forceload: misc/md_raid
    forceload: misc/md_mirror
    forceload: misc/md_hotspares
    forceload: misc/md_stripe
    forceload: drv/pcipsy
    forceload: drv/glm
    forceload: drv/sd
    rootdev:/pseudo/md@0:0,10,blk
    * End MDD root info (do not edit)

  11. Edit the /temp-mountpoint/etc/vfstab file to change the root entry from a metadevice to a corresponding normal slice for each file system on the root disk that is part of the metadevice.


    Example: 
    Change from—
    /dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -

  12. Unmount the temporary file system, and check the raw disk device.


    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

  13. Reboot the node in single-user mode.


    # reboot -- "-s"
    

  14. Replace the disk ID using the scdidadm command.


    # scdidadm -R rootdisk
    

  15. Use the metadb command to recreate the state database replicas.


    # metadb -c copies -af raw-disk-device
    

    -c copies

    Specifies the number of replicas to create.

    -af raw-disk-device

    Creates initial state database replicas on the named raw disk device.

  16. Reboot the node in cluster mode.

    1. Start the reboot.


      # reboot
      

      During this boot you will see error or warning messages, ending with the following instruction:


      Type control-d to proceed with normal startup,
      (or give root password for system maintenance):

    2. Press Control-d to boot into multiuser mode.

  17. From a cluster node other than the restored node, use the metaset command to add the restored node to all metasets.


    phys-schost-2# metaset -s setname -a -h nodelist
    

    -a

    Adds (creates) the metaset.

    Set up the metadevice/mirror for root (/) according to the Solstice DiskSuite documentation.

    The node is rebooted into cluster mode. The cluster is ready to use.

Example—Restoring a root (/) File System That Was on a Metadevice (Solstice DiskSuite/Solaris Volume Manager)

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the metaset schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.


[Become superuser on a cluster node with access to the metaset, 
other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]

Boot the node from the Solaris CD:


[Use format and newfs to recreate partitions and file systems.]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Remove the lines in /temp-mountpoint/etc/system file for MDD root information:]
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_mirror
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: drv/pcipsy
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
[Edit the /temp-mountpoint/etc/vfstab file:]
Example: 
Change from—
/dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -
[Unmount the temporary file system and check the raw disk device:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Replace the disk ID:]
# scdidadm -R /dev/dsk/c0t0d0
[Recreate state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
# reboot
Type Control-d to boot into multiuser mode.
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

SPARC: How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)

Use this procedure to restore a non-encapsulated root (/) file system to a node. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  2. Boot the node that you want to restore.

    • If you are using the Solaris CD, at the OpenBoot PROM ok prompt, type the following command:


      ok boot cdrom -s
      

    • If you are using a Solaris JumpStart™ server, at the OpenBoot PROM ok prompt, type the following command:


      ok boot net -s
      

  3. Create all the partitions and swap on the root disk using the format command.

    Recreate the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, using the newfs command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  6. Restore the root (/) file system from backup, and unmount and check the file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

    The file system is now restored.

  7. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  8. Reboot the node in single-user mode.

    1. Start the reboot.


      # reboot -- "-s"
      

      During this boot you will see error or warning messages, ending with the following instruction:


      Type control-d to proceed with normal startup,
      (or give root password for system maintenance):

    2. Type the root password.

  9. Update the disk ID using the scdidadm command.


    # scdidadm -R /dev/rdsk/disk-device
    

  10. Press Control-d to resume in multiuser mode.

    The node reboots into cluster mode. The cluster is ready to use.

SPARC: Example—Restoring a Non-Encapsulated root (/) File System (VERITAS Volume Manager)

The following example shows a non-encapsulated root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0.


[Replace the failed disk and boot the node:]

Boot the node from the Solaris CD. At the OpenBoot PROM ok prompt, type the following command:


ok boot cdrom -s
...
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Update the disk ID:]
# scdidadm -R /dev/rdsk/c0t0d0
[Press Control-d to resume in multiuser mode]

SPARC: How to Restore an Encapsulated root (/) File System (VERITAS Volume Manager)

Use this procedure to restore an encapsulated root (/) file system to a node. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  2. Boot the node that you want to restore.

    • If you are using the Solaris CD, at the OpenBoot PROM ok prompt, type the following command:


      ok boot cdrom -s
      

    • If you are using a Solaris JumpStart™ server, at the OpenBoot PROM ok prompt, type the following command:


      ok boot net -s
      

  3. Create all the partitions and swap on the root disk using the format command.

    Recreate the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, using the newfs command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  6. Restore the root (/) file system from backup.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    

  7. Create an empty install-db file.

    This puts the node in VxVM install mode at the next reboot.


    # touch /temp-mountpoint/etc/vx/reconfig.d/state.d/install-db
    

  8. Remove or comment out the following entries from the /temp-mountpoint/etc/system file.


    * rootdev:/pseudo/vxio@0:0
    * set vxio:vol_rootdev_is_volume=1

  9. Edit the /temp-mountpoint/etc/vfstab file and replace all VxVM mount points with the standard disk devices for the root disk, such as /dev/dsk/c0t0d0s0.


    Example: 
    Change from—
    /dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol /      ufs   1     no -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -

  10. Unmount the temporary file system and check the file system.


    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

  11. Install the boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  12. Reboot the node in single-user mode.


    # reboot -- "-s"
    

  13. Update the disk ID using scdidadm(1M).


    # scdidadm -R /dev/rdsk/c0t0d0
    

  14. Run vxinstall to encapsulate the disk and reboot.


    # vxinstall
    

  15. If there is a conflict in minor number with any other system, unmount the global devices and reminor the disk group.

    • Unmount the global devices file system on the cluster node.


      # umount /global/.devices/node@nodeid
      

    • Reminor the rootdg disk group on the cluster node.


      # vxdg reminor rootdg 100
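
    One way to determine whether the rootdg minor numbers conflict with those on another node in the first place (a sketch; the device paths depend on your configuration) is to compare the minor numbers that ls reports for the rootdg volume devices on each node:


      # ls -lL /dev/vx/dsk/rootdg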
      

  16. Shut down and reboot the node in cluster mode.


    # shutdown -g0 -i6 -y
    

SPARC: Example—Restoring an Encapsulated root (/) File System (VERITAS Volume Manager)

The following example shows an encapsulated root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0.


[Replace the failed disk and boot the node:]

Boot the node from the Solaris CD. At the OpenBoot PROM ok prompt, type the following command:


ok boot cdrom -s
...
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Create an empty install-db file:]
# touch /a/etc/vx/reconfig.d/state.d/install-db
[Edit /etc/system on the temporary file system and 
remove or comment out the following entries:]
	* rootdev:/pseudo/vxio@0:0
	* set vxio:vol_rootdev_is_volume=1
[Edit /etc/vfstab on the temporary file system:]
Example: 
Change from—
/dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol / ufs 1 no -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -
[Unmount the temporary file system, then check the file system:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Update the disk ID:]
# scdidadm -R /dev/rdsk/c0t0d0
[Run vxinstall:]
# vxinstall
Choose to encapsulate the root disk.
[If there is a conflict in minor number, reminor the rootdg disk group:]
# umount /global/.devices/node@nodeid
# vxdg reminor rootdg 100
# shutdown -g0 -i6 -y

Where to Go From Here

For instructions about how to mirror the encapsulated root disk, see the Sun Cluster Software Installation Guide for Solaris OS.