Sun Cluster System Administration Guide for Solaris OS

Restoring Cluster Files

The ufsrestore(1M) command copies files to disk, relative to the current working directory, from backups created by using the ufsdump(1M) command. You can use ufsrestore to reload an entire file system hierarchy from a level 0 dump and the incremental dumps that follow it, or to restore one or more single files from any dump tape. If ufsrestore is run by superuser or by a user who has assumed an equivalent role, files are restored with their original owner, last modification time, and mode (permissions).

Before you start to restore files or file systems, use the following task map to identify the procedure that applies to your configuration.

Table 11–2 Task Map: Restoring Cluster Files

Task: For Solaris Volume Manager, restore files interactively
Instructions: How to Restore Individual Files Interactively (Solaris Volume Manager)

Task: For Solaris Volume Manager, restore the root (/) file system
Instructions: How to Restore the Root (/) File System (Solaris Volume Manager)
Instructions: How to Restore a Root (/) File System That Was on a Solstice DiskSuite Metadevice or Solaris Volume Manager Volume

Task: For Veritas Volume Manager, restore a nonencapsulated root (/) file system
Instructions: SPARC: How to Restore a Nonencapsulated Root (/) File System (Veritas Volume Manager)

Task: For Veritas Volume Manager, restore an encapsulated root (/) file system
Instructions: SPARC: How to Restore an Encapsulated Root (/) File System (Veritas Volume Manager)

Procedure: How to Restore Individual Files Interactively (Solaris Volume Manager)

Use this procedure to restore one or more individual files. Ensure that the cluster is running without errors before performing the restore procedure.

  1. Become superuser or assume a role that provides solaris.cluster.admin RBAC authorization on the cluster node you are restoring.

  2. Stop all the data services that are using the files to be restored.


    # clresourcegroup offline resource-group
    
  3. Restore the files.


    # ufsrestore
    
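For reference, interactive mode presents a small shell for browsing the dump and marking files to restore. The session below is a hypothetical sketch: the tape device /dev/rmt/0 and the file ./etc/inet/hosts are examples only, and the exact prompts can vary by Solaris release.

```
# ufsrestore if /dev/rmt/0
ufsrestore > ls
ufsrestore > add ./etc/inet/hosts
ufsrestore > extract
Specify next volume #: 1
set owner/mode for '.'? [yn] n
ufsrestore > quit
```

After the files are extracted, restart the data services with clresourcegroup online resource-group.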

Procedure: How to Restore the Root (/) File System (Solaris Volume Manager)

Use this procedure to restore the root (/) file system to a new disk, such as after replacing a bad root disk. The node being restored should not be booted. Ensure that the cluster is running without errors before performing the restore procedure.


Note –

Because you must partition the new disk by using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and re-create file systems as appropriate.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node with access to the disk sets to which the node to be restored is also attached.

    Use a node other than the node that you are restoring.

  2. Remove the hostname of the node being restored from all metasets.

    Run this command from a node in the metaset other than the node that you are removing. Because the recovering node is offline, the system will display an RPC: Rpcbind failure - RPC: Timed out error. Ignore this error and continue to the next step.


    # metaset -s setname -f -d -h nodelist
    
    -s setname

    Specifies the disk set name.

    -f

    Deletes the last host from the disk set.

    -d

    Deletes from the disk set.

    -h nodelist

    Specifies the name of the node to delete from the disk set.

  3. Restore the root (/) and /usr file systems.

    To restore the root and /usr file systems, follow the procedure in Chapter 26, Restoring UFS Files and File Systems (Tasks), in System Administration Guide: Devices and File Systems. Omit the step in the Solaris OS procedure to reboot the system.


    Note –

    Ensure that you create the /global/.devices/node@nodeid file system.


  4. Reboot the node in multiuser mode.


    # reboot
    
  5. Replace the disk ID.


    # cldevice repair rootdisk
    
  6. Use the metadb(1M) command to re-create the state database replicas.


    # metadb -c copies -af raw-disk-device
    
    -c copies

    Specifies the number of replicas to create.

    -a

    Adds replicas.

    -f

    Forces the creation of the initial state database replicas.

    raw-disk-device

    The raw disk device on which to create the replicas.

  7. From a cluster node other than the restored node, add the restored node to all disk sets.


    phys-schost-2# metaset -s setname -a -h nodelist
    
    -a

    Creates and adds the host to the disk set.

    The node is rebooted into cluster mode. The cluster is ready to use.


Example 11–6 Restoring the Root (/) File System (Solaris Volume Manager)

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the disk set schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are re-created on /dev/rdsk/c0t0d0s4.


[Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]
Restore the root (/) and /usr file systems using the procedure in the Solaris system administration documentation
[Reboot:]
# reboot
[Replace the disk ID:]
# cldevice repair /dev/dsk/c0t0d0
[Re-create state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

Procedure: How to Restore a Root (/) File System That Was on a Solstice DiskSuite Metadevice or Solaris Volume Manager Volume

Use this procedure to restore a root (/) file system that was on a Solstice DiskSuite metadevice or a Solaris Volume Manager volume when the backups were performed. Perform this procedure under circumstances such as when a root disk is corrupted and replaced with a new disk. The node being restored should not be booted. Ensure that the cluster is running without errors before performing the restore procedure.


Note –

Because you must partition the new disk by using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and re-create file systems as appropriate.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node with access to the disk set, other than the node that you are restoring.

    Use a node other than the node that you are restoring.

  2. Remove the hostname of the node being restored from all disk sets.


    # metaset -s setname -f -d -h nodelist
    
    -s setname

    Specifies the metaset name.

    -f

    Deletes the last host from the disk set.

    -d

    Deletes from the metaset.

    -h nodelist

    Specifies the name of the node to delete from the metaset.

  3. Replace the failed disk on the node on which the root (/) file system will be restored.

    Refer to disk replacement procedures in the documentation that shipped with your server.

  4. Boot the node that you are restoring.

    • If you are using the Solaris OS CD, note the following:

      • SPARC: Type:


        ok boot cdrom -s
        
      • x86: Insert the CD into the system's CD drive and boot the system by shutting it down and then turning it off and on. In the Current Boot Parameters screen, type b or i.


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
        
    • If you are using a Solaris JumpStart server, note the following:

      • SPARC: Type:


        ok boot net -s
        
      • x86: Insert the CD into the system's CD drive and boot the system by shutting it down and then turning it off and on. In the Current Boot Parameters screen, type b or i.


                             <<< Current Boot Parameters >>>
        Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@
        7,1/sd@0,0:a
        Boot args:
        
        Type b [file-name] [boot-flags] <ENTER> to boot with options
        or   i <ENTER>                          to enter boot interpreter
        or   <ENTER>                            to boot with defaults
        
                         <<< timeout in 5 seconds >>>
        Select (b)oot or (i)nterpreter: b -s
        
  5. Create all the partitions and swap space on the root disk by using the format command.

    Re-create the original partitioning scheme that was on the failed disk.

  6. Create the root (/) file system and other file systems as appropriate, by using the newfs command.

    Re-create the original file systems that were on the failed disk.


    Note –

    Ensure that you create the /global/.devices/node@nodeid file system.


  7. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    
  8. Use the following commands to restore the root (/) file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    
  9. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk \
    raw-disk-device
    
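The backquoted uname -i in this command expands to the machine's platform name, so the bootblk path resolves differently on each platform. As an illustration, you can preview the expansion harmlessly; the sketch below only echoes the path and installs nothing (on a non-SPARC build machine the platform component will differ).

```shell
# Preview how the shell expands the platform-specific bootblk path.
# Nothing is installed; this only prints the resolved path.
bootblk_path="/usr/platform/`uname -i`/lib/fs/ufs/bootblk"
echo "$bootblk_path"
```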
  10. Remove the lines in the /temp-mountpoint/etc/system file for MDD root information.


    * Begin MDD root info (do not edit)
    forceload: misc/md_trans
    forceload: misc/md_raid
    forceload: misc/md_mirror
    forceload: misc/md_hotspares
    forceload: misc/md_stripe
    forceload: drv/pcipsy
    forceload: drv/glm
    forceload: drv/sd
    rootdev:/pseudo/md@0:0,10,blk
    * End MDD root info (do not edit)
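Because the block is delimited by its Begin/End marker comments, the removal can be scripted with a sed range delete. The sketch below runs against a sample file in /tmp; on the restored node you would apply the same sed expression to a backup copy of /temp-mountpoint/etc/system.

```shell
# Build a sample /etc/system fragment containing an MDD root info block.
cat > /tmp/system.demo <<'EOF'
set maxusers=40
* Begin MDD root info (do not edit)
forceload: misc/md_mirror
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
set noexec_user_stack=1
EOF

# Delete every line from the Begin marker through the End marker, inclusive.
sed '/^\* Begin MDD root info/,/^\* End MDD root info/d' \
    /tmp/system.demo > /tmp/system.cleaned
cat /tmp/system.cleaned
```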
  11. Edit the /temp-mountpoint/etc/vfstab file to change the root entry from a Solstice DiskSuite metadevice or a Solaris Volume Manager volume to a corresponding normal slice for each file system on the root disk that is part of the metadevice or volume.


    Example: 
    Change from—
    /dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -
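This substitution can also be scripted. A minimal sketch, assuming the root metadevice is d10 and the underlying slice is c0t0d0s0 (both illustrative; substitute your own devices), working on a sample file rather than the live vfstab:

```shell
# Sample vfstab root entry as it appears while root is on metadevice d10.
cat > /tmp/vfstab.demo <<'EOF'
/dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -
EOF

# Replace the block and raw metadevice paths with the underlying slice.
sed -e 's|/dev/md/dsk/d10|/dev/dsk/c0t0d0s0|' \
    -e 's|/dev/md/rdsk/d10|/dev/rdsk/c0t0d0s0|' \
    /tmp/vfstab.demo > /tmp/vfstab.new
cat /tmp/vfstab.new
```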
  12. Unmount the temporary file system, and check the raw disk device.


    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    
  13. Reboot the node in multiuser mode.


    # reboot
    
  14. Replace the disk ID.


    # cldevice repair rootdisk
    
  15. Use the metadb command to re-create the state database replicas.


    # metadb -c copies -af raw-disk-device
    
    -c copies

    Specifies the number of replicas to create.

    -af raw-disk-device

    Creates initial state database replicas on the named raw disk device.

  16. From a cluster node other than the restored node, add the restored node to all disk sets.


    phys-schost-2# metaset -s setname -a -h nodelist
    
    -a

    Adds the host to the disk set.

    Set up the metadevice or volume mirror for the root (/) file system according to the Solstice DiskSuite or Solaris Volume Manager documentation.

    The node is rebooted into cluster mode. The cluster is ready to use.


Example 11–7 Restoring a Root (/) File System That Was on a Solstice DiskSuite Metadevice or Solaris Volume Manager Volume

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the metaset schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are re-created on /dev/rdsk/c0t0d0s4.


[Become superuser or assume a role that provides solaris.cluster.modify RBAC authorization on a cluster node with access to the metaset, other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]

Boot the node from the Solaris OS CD:


[Use format and newfs to recreate partitions and file systems.]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

[Remove the lines in the /temp-mountpoint/etc/system file for MDD root information:]
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_mirror
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: drv/pcipsy
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
[Edit the /temp-mountpoint/etc/vfstab file]
Example: 
Change from—
/dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -
[Unmount the temporary file system and check the raw disk device:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Reboot:]
# reboot
[Replace the disk ID:]
# cldevice repair /dev/rdsk/c0t0d0
[Re-create state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

Procedure: SPARC: How to Restore a Nonencapsulated Root (/) File System (Veritas Volume Manager)

Use this procedure to restore a nonencapsulated root (/) file system to a node. The node being restored should not be booted. Ensure the cluster is running without errors before performing the restore procedure.


Note –

Because you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and re-create file systems as appropriate.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that shipped with your server.

  2. Boot the node that you are restoring.

    • If you are using the Solaris OS CD, at the OpenBoot PROM ok prompt, type the following command:


      ok boot cdrom -s
      
    • If you are using a Solaris JumpStart server, at the OpenBoot PROM ok prompt, type the following command:


      ok boot net -s
      
  3. Create all the partitions and swap on the root disk by using the format command.

    Re-create the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, using the newfs command.

    Re-create the original file systems that were on the failed disk.


    Note –

    Ensure that you create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    
  6. Restore the root (/) file system from backup, and unmount and check the file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

    The file system is now restored.

  7. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    
  8. Reboot the node in multiuser mode.


    # reboot
    
  9. Update the disk ID.


    # cldevice repair /dev/rdsk/disk-device
    
  10. Press Control-d to resume in multiuser mode.

    The node reboots into cluster mode. The cluster is ready to use.


Example 11–8 SPARC: Restoring a Nonencapsulated Root (/) File System (Veritas Volume Manager)

The following example shows a nonencapsulated root (/) file system that is restored to the node phys-schost-1 from the tape device /dev/rmt/0.


[Replace the failed disk and boot the node:]

Boot the node from the Solaris OS CD. At the OpenBoot PROM ok prompt, type the following command:


ok boot cdrom -s
...
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0

[Reboot:]
# reboot
[Update the disk ID:]
# cldevice repair /dev/rdsk/c0t0d0

Procedure: SPARC: How to Restore an Encapsulated Root (/) File System (Veritas Volume Manager)

Use this procedure to restore an encapsulated root (/) file system to a node. The node being restored should not be booted. Ensure that the cluster is running without errors before performing the restore procedure.


Note –

Because you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and re-create file systems as appropriate.


The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.

  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that shipped with your server.

  2. Boot the node that you are restoring.

    • If you are using the Solaris OS CD, at the OpenBoot PROM ok prompt, type the following command:


      ok boot cdrom -s
      
    • If you are using a Solaris JumpStart server, at the OpenBoot PROM ok prompt, type the following command:


      ok boot net -s
      
  3. Create all the partitions and swap space on the root disk by using the format command.

    Re-create the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, by using the newfs command.

    Re-create the original file systems that were on the failed disk.


    Note –

    Ensure that you create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    
  6. Restore the root (/) file system from backup.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    
  7. Create an empty install-db file.

    This file puts the node in VxVM installation mode at the next reboot.


    # touch \
    /temp-mountpoint/etc/vx/reconfig.d/state.d/install-db
    
  8. Remove the following entries from the /temp-mountpoint/etc/system file.


    * rootdev:/pseudo/vxio@0:0
    * set vxio:vol_rootdev_is_volume=1
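Because both entries reference the vxio driver, a single sed expression can remove them. The sketch below works on a sample file in /tmp; on the restored node you would apply it to a backup copy of /temp-mountpoint/etc/system.

```shell
# Sample /etc/system fragment containing the VxVM root entries.
cat > /tmp/system.vx <<'EOF'
set maxusers=40
rootdev:/pseudo/vxio@0:0
set vxio:vol_rootdev_is_volume=1
EOF

# Delete any line that references the vxio driver.
sed '/vxio/d' /tmp/system.vx > /tmp/system.vx.clean
cat /tmp/system.vx.clean
```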
  9. Edit the /temp-mountpoint/etc/vfstab file and replace all VxVM mount points with the standard disk devices for the root disk, such as /dev/dsk/c0t0d0s0.


    Example: 
    Change from—
    /dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol /      ufs   1     no -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -
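As with the Solaris Volume Manager procedure, this edit can be scripted. A sketch, assuming rootvol in disk group rootdg maps to the illustrative slice c0t0d0s0, again run against a sample file rather than the live vfstab:

```shell
# Sample vfstab root entry while root is on the VxVM rootvol volume.
cat > /tmp/vfstab.vx <<'EOF'
/dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol /      ufs   1     no -
EOF

# Replace the VxVM block and raw volume paths with the underlying slice.
sed -e 's|/dev/vx/dsk/rootdg/rootvol|/dev/dsk/c0t0d0s0|' \
    -e 's|/dev/vx/rdsk/rootdg/rootvol|/dev/rdsk/c0t0d0s0|' \
    /tmp/vfstab.vx > /tmp/vfstab.vx.new
cat /tmp/vfstab.vx.new
```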
  10. Unmount the temporary file system and check the file system.


    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    
  11. Install the boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    
  12. Reboot the node in multiuser mode.


    # reboot
    
  13. Update the disk ID by using the cldevice(1CL) command.


    # cldevice repair /dev/rdsk/c0t0d0
    
  14. Run the vxinstall command to encapsulate the disk and reboot.


    # vxinstall

  15. If a conflict in minor number occurs with any other system, unmount the global devices and re-minor the disk group.

    • Unmount the global devices file system on the cluster node.


      # umount /global/.devices/node@nodeid
      
    • Re-minor the rootdg disk group on the cluster node.


      # vxdg reminor rootdg 100
      
  16. Shut down and reboot the node in cluster mode.


    # shutdown -g0 -i6 -y
    

Example 11–9 SPARC: Restoring an Encapsulated root (/) File System (Veritas Volume Manager)

The following example shows an encapsulated root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0.


[Replace the failed disk and boot the node:]

Boot the node from the Solaris OS CD. At the OpenBoot PROM ok prompt, type the following command:


ok boot cdrom -s
...
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Create an empty install-db file:]
# touch /a/etc/vx/reconfig.d/state.d/install-db
[Edit /etc/system on the temporary file system and 
remove or comment out the following entries:]
	* rootdev:/pseudo/vxio@0:0
	* set vxio:vol_rootdev_is_volume=1
[Edit /etc/vfstab on the temporary file system:]
Example: 
Change from—
/dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol / ufs 1 no -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -
[Unmount the temporary file system, then check the file system:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot:]
# reboot
[Update the disk ID:]
# cldevice repair /dev/rdsk/c0t0d0
[Encapsulate the disk:]
# vxinstall
Choose to encapsulate the root disk.
[If a conflict in minor number occurs, reminor the rootdg disk group:]
# umount /global/.devices/node@nodeid
# vxdg reminor rootdg 100
# shutdown -g0 -i6 -y

See Also

For instructions about how to mirror the encapsulated root disk, see the Sun Cluster Software Installation Guide for Solaris OS.