Sun Cluster 3.1 10/03 System Administration Guide

Chapter 9 Backing Up and Restoring a Cluster

This chapter provides step-by-step instructions for backing up and restoring cluster files.

Backing Up a Cluster

Table 9–1 Task Map: Backing Up Cluster Files

Task: Find the names of the file systems you want to back up
For instructions, go to: How to Find File System Names to Back Up

Task: Calculate how many tapes you will need to contain a full backup
For instructions, go to: How to Determine the Number of Tapes Needed for a Full Backup

Task: Back up the root file system
For instructions, go to: How to Back Up the root (/) File System

Task: Perform online backup for mirrored or plexed file systems
For instructions, go to: How to Perform Online Backups for Mirrors (Solstice DiskSuite/Solaris Volume Manager) and How to Perform Online Backups for Volumes (VERITAS Volume Manager)

How to Find File System Names to Back Up

Use this procedure to determine the names of the file systems you want to back up.

  1. Display the contents of the /etc/vfstab file.

    You do not need to be superuser to run this command.


    % more /etc/vfstab
    

  2. Look in the mount point column for the name of the file system you want to back up.

    Use this name when you back up the file system.



Example—Finding File System Names to Back Up

In the following example, the names of available file systems listed in the /etc/vfstab file are displayed.


% more /etc/vfstab
#device             device             mount  FS fsck  mount  mount
#to mount           to fsck            point  type     pass   at boot  options
#
#/dev/dsk/c1d0s2    /dev/rdsk/c1d0s2   /usr     ufs     1      yes      -
 fd                 -                  /dev/fd  fd      -      no       -
 /proc              -                  /proc    proc    -      no       -
 /dev/dsk/c1t6d0s1  -                  -        swap    -      no       -
 /dev/dsk/c1t6d0s0  /dev/rdsk/c1t6d0s0 /        ufs     1      no       -
 /dev/dsk/c1t6d0s3  /dev/rdsk/c1t6d0s3 /cache   ufs     2      yes      -
 swap               -                  /tmp     tmpfs   -      yes      -
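
If you prefer to filter the output rather than scan the columns by eye, the following is a minimal sketch that prints only the mount points of UFS file systems; the awk field positions assume the standard seven-column /etc/vfstab layout shown above, and the two lines of output correspond to the example entries.

% awk '$1 !~ /^#/ && $4 == "ufs" { print $3 }' /etc/vfstab
/
/cache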

How to Determine the Number of Tapes Needed for a Full Backup

Use this procedure to calculate the number of tapes you will need to back up a file system.

  1. Become superuser on the cluster node you want to back up.

  2. Estimate the size of the backup in bytes.


    # ufsdump S filesystem 
    

    S

    Displays the estimated number of bytes needed to perform the backup.

    filesystem

    Specifies the name of the file system you want to back up.

  3. Divide the estimated size by the capacity of the tape to see how many tapes you need, as shown in the sketch after the following example.

Example—Determining the Number of Tapes Needed

In the following example, the file system size of 905,881,620 bytes fits easily on a 4 GB tape (905,881,620 ÷ 4,000,000,000 is less than 1, so a single tape is sufficient).


# ufsdump S /global/phys-schost-1
905881620
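
To turn the estimate into a tape count, you can let the shell round up for you. The following is a minimal Bourne shell sketch, assuming the 4,000,000,000-byte tape capacity used in the example above; bc handles the arithmetic because the byte counts can exceed the range of expr.

# BYTES=`ufsdump S /global/phys-schost-1`
# CAPACITY=4000000000
# echo "($BYTES + $CAPACITY - 1) / $CAPACITY" | bc
1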

How to Back Up the root (/) File System

Use this procedure to back up the root (/) file system of a cluster node. Be sure the cluster is running problem-free before performing the backup procedure.

  1. Become superuser on the cluster node you want to back up.

  2. Switch each running data service from the node to be backed up to another node in the cluster.


    # scswitch -z -D disk-device-group[,...] -h node[,...]
    

    -z

    Performs the switch.

    -D disk-device-group

    Name of the disk device group to be switched.

    -h node

    Name of the cluster node to switch the disk device group to. This node becomes the new primary.

  3. Stop the node.


    # shutdown -g0 -y -i0
    

  4. At the ok prompt, boot the node in non-cluster mode.


    ok boot -x
    

  5. Back up the root (/) file system.

    • If the root disk is not encapsulated, use the following command.


      # ufsdump 0ucf dump-device /
      

    • If the root disk is encapsulated, use the following command.


      # ufsdump 0ucf dump-device /dev/vx/rdsk/rootvol
      

    Refer to the ufsdump(1M) man page for more information.

  6. Reboot the node in cluster mode.


    # init 6
    

Example—Backing Up the root (/) File System

In the following example, the root (/) file system is backed up onto tape device /dev/rmt/0.


# ufsdump 0ucf /dev/rmt/0 /
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 18 18:06:15 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/c0t0d0s0 (phys-schost-1:/) to /dev/rmt/0
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Estimated 859086 blocks (419.48MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: 859066 blocks (419.47MB) on 1 volume at 2495 KB/sec
  DUMP: DUMP IS DONE
  DUMP: Level 0 dump on Tue Apr 18 18:06:15 2000
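
For context, the following sketch strings together the surrounding steps of the procedure with hypothetical names: the device group schost-1 and the node phys-schost-2 stand in for your own disk device group and failover target.

[Switch the device group to another node (hypothetical names):]
# scswitch -z -D schost-1 -h phys-schost-2
[Shut down the node and boot it in non-cluster mode:]
# shutdown -g0 -y -i0
ok boot -x
[Back up the root (/) file system to the tape device:]
# ufsdump 0ucf /dev/rmt/0 /
[Reboot the node in cluster mode:]
# init 6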

How to Perform Online Backups for Mirrors (Solstice DiskSuite/Solaris Volume Manager)

A mirrored metadevice can be backed up without unmounting it or taking the entire mirror offline. One of the submirrors must be taken offline temporarily, thus losing mirroring, but it can be placed online and resynced as soon as the backup is complete, without halting the system or denying user access to the data. Using mirrors to perform online backups creates a backup that is a “snapshot” of an active file system.

A problem might occur if a program writes data onto the volume immediately before the lockfs command is run. To prevent this problem, temporarily stop all the services running on this node. Also, be sure the cluster is running problem-free before performing the backup procedure.
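
One way to quiesce the data services first is the scswitch resource-group form used later in this chapter, sketched below with a hypothetical resource group name (nfs-rg) and node name (phys-schost-1).

[Take the resource group offline before locking the file system:]
# scswitch -z -g nfs-rg -h ""
[Bring the resource group back online after the submirror is reattached:]
# scswitch -z -g nfs-rg -h phys-schost-1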

  1. Become superuser on the cluster node you want to back up.

  2. Use the metaset(1M) command to determine which node owns the volume to be backed up.


    # metaset -s setname
    

    -s setname

    Specifies the diskset name.

  3. Use the lockfs(1M) command with the -w option to lock the file system from writes.


    # lockfs -w mountpoint 
    


    Note –

    You must lock the file system only if a UFS file system resides on the mirror. For example, if the metadevice is set up as a raw device for database management software or some other specific application, it is not necessary to use the lockfs command. You might, however, want to run the appropriate vendor-dependent utility to flush any buffers and lock access.


  4. Use the metastat(1M) command to determine the names of the submirrors.


    # metastat -s setname -p
    

    -p

    Displays the status in a format similar to the md.tab file.

  5. Use the metadetach(1M) command to take one submirror offline from the mirror.


    # metadetach -s setname mirror submirror
    


    Note –

    Reads will continue to be made from the other submirrors. However, the offline submirror will be out of sync as soon as the first write is made to the mirror. This inconsistency is corrected when the offline submirror is brought back online. You do not need to run fsck.


  6. Unlock the file systems and allow writes to continue, using the lockfs command with the -u option.


    # lockfs -u mountpoint 
    

  7. Perform a file system check.


    # fsck /dev/md/diskset/rdsk/submirror
    

  8. Back up the offline submirror to tape or another medium.

    Use the ufsdump(1M) command or the backup utility you normally use.


    # ufsdump 0ucf dump-device submirror
    


    Note –

    Use the raw device (/rdsk) name for the submirror, rather than the block device (/dsk) name.


  9. Use the metattach(1M) command to place the metadevice back online.


    # metattach -s setname mirror submirror
    

    When the metadevice is placed online, it is automatically resynced with the mirror.

  10. Use the metastat command to verify that the submirror is resyncing.


    # metastat -s setname mirror
    

Example—Performing Online Backups for Mirrors (Solstice DiskSuite/Solaris Volume Manager)

In the following example, the cluster node phys-schost-1 is the owner of the metaset schost-1; therefore, the backup procedure is performed from phys-schost-1. The mirror /dev/md/schost-1/dsk/d0 consists of the submirrors d10, d20, and d30.


[Determine the owner of the metaset:]
# metaset -s schost-1
Set name = schost-1, Set number = 1
Host                Owner
  phys-schost-1     Yes 
...
[Lock the file system from writes:] 
# lockfs -w /global/schost-1
[List the submirrors:]
# metastat -s schost-1 -p
schost-1/d0 -m schost-1/d10 schost-1/d20 schost-1/d30 1
schost-1/d10 1 1 d4s0
schost-1/d20 1 1 d6s0
schost-1/d30 1 1 d8s0
[Take a submirror offline:]
# metadetach -s schost-1 d0 d30
[Unlock the file system:]
# lockfs -u /global/schost-1
[Check the file system:]
# fsck /dev/md/schost-1/rdsk/d30
[Copy the submirror to the backup device:]
# ufsdump 0ucf /dev/rmt/0 /dev/md/schost-1/rdsk/d30
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 25 16:15:51 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/schost-1/rdsk/d30 to /dev/rmt/0.
  ...
  DUMP: DUMP IS DONE
[Bring the submirror back online:]
# metattach -s schost-1 d0 d30
schost-1/d0: submirror schost-1/d30 is attached
[Verify that the submirror is resyncing:]
# metastat -s schost-1 d0
schost-1/d0: Mirror
    Submirror 0: schost-1/d10
      State: Okay
    Submirror 1: schost-1/d20
      State: Okay
    Submirror 2: schost-1/d30
      State: Resyncing
    Resync in progress: 42% done
    Pass: 1
    Read option: roundrobin (default)
...

How to Perform Online Backups for Volumes (VERITAS Volume Manager)

VERITAS Volume Manager identifies a mirrored volume as a plex. A plex can be backed up without unmounting it or taking the entire volume offline. This is done by creating a snapshot copy of the volume and backing up this temporary volume without halting the system or denying user access to the data.

Be sure the cluster is running problem-free before performing the backup procedure.

  1. Log on to any node in the cluster, and become superuser on the current primary node for the disk group on the cluster.

  2. List the disk group information.


    # vxprint -g diskgroup
    

  3. Run the scstat(1M) command to determine which node currently has the disk group imported; that node is the primary node for the disk group.


    # scstat -D
    

    -D

    Shows the status for all disk device groups.

  4. Create a snapshot of the volume using the vxassist(1M) command.


    # vxassist -g diskgroup snapstart volume
    


    Note –

    Creating a snapshot can take a long time depending on the size of your volume.


  5. Verify the new volume was created.


    # vxprint -g diskgroup
    

    When the snapshot operation is complete, a status of SNAPDONE is displayed in the STATE field for the new snapshot plex.

  6. Stop any data services that are accessing the file system.


    # scswitch -z -g resource-group[,...] -h ""
    


    Note –

    Stop all data services to ensure that the data file system is properly backed up. If no data services are running, you do not need to perform Step 6 and Step 8.


  7. Create a backup volume named bkup-vol and attach the snapshot volume to it using the vxassist command.


    # vxassist -g diskgroup snapshot volume bkup-vol
    

  8. Restart any data services that were stopped in Step 6, using the scswitch(1M) command.


    # scswitch -z -g resource-group[,...] -h node[,...]
    

  9. Verify that the snapshot plex is now attached to the new volume bkup-vol, using the vxprint command.


    # vxprint -g diskgroup
    

  10. Register the disk group configuration change.


    # scconf -c -D name=diskgroup,sync
    

  11. Check the backup volume using the fsck command.


    # fsck -y /dev/vx/rdsk/diskgroup/bkup-vol
    

  12. Perform a backup to copy the volume bkup-vol to tape or another medium.

    Use the ufsdump(1M) command or the backup utility you normally use.


    # ufsdump 0ucf dump-device /dev/vx/dsk/diskgroup/bkup-vol
    

  13. Remove the temporary volume using vxedit(1M).


    # vxedit -rf rm bkup-vol
    

  14. Register the disk group configuration changes using the scconf(1M) command.


    # scconf -c -D name=diskgroup,sync
    

Example—Performing Online Backups for Volumes (VERITAS Volume Manager)

In the following example, the cluster node phys-schost-2 is the primary node for the disk group schost-1; therefore, the backup procedure is performed from phys-schost-2. The volume vol01 is copied and then associated with a new volume, bkup-vol.


[Become superuser on the primary node.]
[Identify the current primary node for the disk group:]
# scstat -D
-- Device Group Servers --
                         Device Group     Primary           Secondary
                         ------------     -------           ---------
 Device group servers:   rmt/1            -                 -
 Device group servers:   schost-1         phys-schost-2     phys-schost-1

-- Device Group Status --
                             Device Group        Status              
                             ------------        ------              
 Device group status:        rmt/1               Offline
 Device group status:        schost-1            Online
[List the disk group information:]
# vxprint -g schost-1
TY NAME            ASSOC     KSTATE   LENGTH   PLOFFS STATE   TUTIL0  PUTIL0
dg schost-1       schost-1   -        -        -      -        -      -
  
dm schost-101     c1t1d0s2   -        17678493 -      -        -      -
dm schost-102     c1t2d0s2   -        17678493 -      -        -      -
dm schost-103     c2t1d0s2   -        8378640  -      -        -      -
dm schost-104     c2t2d0s2   -        17678493 -      -        -      -
dm schost-105     c1t3d0s2   -        17678493 -      -        -      -
dm schost-106     c2t3d0s2   -        17678493 -      -        -      -
 
v  vol01          gen        ENABLED  204800   -      ACTIVE   -      -
pl vol01-01       vol01      ENABLED  208331   -      ACTIVE   -      -
sd schost-101-01  vol01-01   ENABLED  104139   0      -        -      -
sd schost-102-01  vol01-01   ENABLED  104139   0      -        -      -
pl vol01-02       vol01      ENABLED  208331   -      ACTIVE   -      -
sd schost-103-01  vol01-02   ENABLED  103680   0      -        -      -
sd schost-104-01  vol01-02   ENABLED  104139   0      -        -      -
pl vol01-03       vol01      ENABLED  LOGONLY  -      ACTIVE   -      -
sd schost-103-02  vol01-03   ENABLED  5        LOG    -        -      -
[Start the snapshot operation:]
# vxassist -g schost-1 snapstart vol01
[Verify the new volume was created:]
# vxprint -g schost-1
TY NAME            ASSOC    KSTATE    LENGTH   PLOFFS STATE   TUTIL0  PUTIL0
dg schost-1       schost-1   -        -        -      -        -      -
  
dm schost-101     c1t1d0s2   -        17678493 -      -        -      -
dm schost-102     c1t2d0s2   -        17678493 -      -        -      -
dm schost-103     c2t1d0s2   -        8378640  -      -        -      -
dm schost-104     c2t2d0s2   -        17678493 -      -        -      -
dm schost-105     c1t3d0s2   -        17678493 -      -        -      -
dm schost-106     c2t3d0s2   -        17678493 -      -        -      -
  
v  vol01          gen        ENABLED  204800   -      ACTIVE   -      -
pl vol01-01       vol01      ENABLED  208331   -      ACTIVE   -      -
sd schost-101-01  vol01-01   ENABLED  104139   0      -        -      -
sd schost-102-01  vol01-01   ENABLED  104139   0      -        -      -
pl vol01-02       vol01      ENABLED  208331   -      ACTIVE   -      -
sd schost-103-01  vol01-02   ENABLED  103680   0      -        -      -
sd schost-104-01  vol01-02   ENABLED  104139   0      -        -      -
pl vol01-03       vol01      ENABLED  LOGONLY  -      ACTIVE   -      -
sd schost-103-02  vol01-03   ENABLED  5        LOG    -        -      -
pl vol01-04       vol01      ENABLED  208331   -      SNAPDONE -      -
sd schost-105-01  vol01-04   ENABLED  104139   0      -        -      -
sd schost-106-01  vol01-04   ENABLED  104139   0      -        -      -
[Stop data services, if necessary:]
# scswitch -z -g nfs-rg -h ""
[Create a copy of the volume:]
# vxassist -g schost-1 snapshot vol01 bkup-vol
[Restart data services, if necessary:]
# scswitch -z -g nfs-rg -h phys-schost-1
[Verify bkup-vol was created:]
# vxprint -g schost-1
TY NAME           ASSOC       KSTATE   LENGTH   PLOFFS STATE   TUTIL0  PUTIL0
dg schost-1       schost-1    -        -        -      -        -      -
 
dm schost-101     c1t1d0s2    -        17678493 -      -        -      -
...
 
v  bkup-vol       gen         ENABLED  204800   -      ACTIVE   -      -
pl bkup-vol-01    bkup-vol    ENABLED  208331   -      ACTIVE   -      -
sd schost-105-01  bkup-vol-01 ENABLED  104139   0      -        -      -
sd schost-106-01  bkup-vol-01 ENABLED  104139   0      -        -      -
 
v  vol01          gen         ENABLED  204800   -      ACTIVE   -      -
pl vol01-01       vol01       ENABLED  208331   -      ACTIVE   -      -
sd schost-101-01  vol01-01    ENABLED  104139   0      -        -      -
sd schost-102-01  vol01-01    ENABLED  104139   0      -        -      -
pl vol01-02       vol01       ENABLED  208331   -      ACTIVE   -      -
sd schost-103-01  vol01-02    ENABLED  103680   0      -        -      -
sd schost-104-01  vol01-02    ENABLED  104139   0      -        -      -
pl vol01-03       vol01       ENABLED  LOGONLY  -      ACTIVE   -      -
sd schost-103-02  vol01-03    ENABLED  5        LOG    -        -      -
[Synchronize the disk group with the cluster framework:]
# scconf -c -D name=schost-1,sync
[Check the file systems:]
# fsck -y /dev/vx/rdsk/schost-1/bkup-vol
[Copy bkup-vol to the backup device:]
# ufsdump 0ucf /dev/rmt/0 /dev/vx/rdsk/schost-1/bkup-vol
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 25 16:15:51 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/vx/rdsk/schost-1/bkup-vol to /dev/rmt/0.
  ...
  DUMP: DUMP IS DONE
[Remove the temporary volume bkup-vol:]
# vxedit -rf rm bkup-vol
[Synchronize the disk group:]
# scconf -c -D name=schost-1,sync

Restoring Cluster Files Overview

The ufsrestore(1M) command copies files to disk, relative to the current working directory, from backups created using the ufsdump(1M) command. You can use ufsrestore to reload an entire file system hierarchy from a level 0 dump and incremental dumps that follow it, or to restore one or more single files from any dump tape. If ufsrestore is run as superuser, files are restored with their original owner, last modification time, and mode (permissions).

Before you start to restore files or file systems, you need to know the following information.

  • Which tapes you need

  • The raw device name on which you are restoring the file system

  • The type of backup unit you are using

  • The device name (local or remote) for the backup unit

  • The partition scheme of any failed disk, because the partitions and file systems must be exactly duplicated on the replacement disk
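
Before restoring, you can also list the contents of a dump tape to confirm that it holds the files you expect. The following is a minimal sketch, assuming the backup is on the local tape device /dev/rmt/0; the t function of ufsrestore prints the table of contents without restoring anything.

# ufsrestore tf /dev/rmt/0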

Restoring Cluster Files

Table 9–2 Task Map: Restoring Cluster Files

Task: For Solstice DiskSuite/Solaris Volume Manager, restore files interactively following Solaris restore procedures
For instructions, go to: How to Restore Individual Files Interactively (Solstice DiskSuite/Solaris Volume Manager)

Task: For Solstice DiskSuite/Solaris Volume Manager, restore the root (/) file system
For instructions, go to: How to Restore the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)

Task: For Solstice DiskSuite/Solaris Volume Manager, restore a root (/) file system that was on a metadevice
For instructions, go to: How to Restore a root (/) File System That Was on a Metadevice (Solstice DiskSuite/Solaris Volume Manager)

Task: For VERITAS Volume Manager, restore a non-encapsulated root (/) file system
For instructions, go to: How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)

Task: For VERITAS Volume Manager, restore an encapsulated root (/) file system
For instructions, go to: How to Restore an Encapsulated root (/) File System (VERITAS Volume Manager)

How to Restore Individual Files Interactively (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to restore one or more individual files. Be sure the cluster is running problem-free before performing the restore procedure.

  1. Become superuser on the cluster node you want to restore.

  2. Stop all the data services that are using the files to be restored.


    # scswitch -z -g resource-group[,...] -h ""
    

  3. Restore the files using the ufsrestore command.
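
The following is a minimal sketch of an interactive restore session, with hypothetical details: the backup is assumed to be on the local tape device /dev/rmt/0, the files are extracted into a scratch directory, and etc/passwd stands in for whatever file you need to recover. The add, extract, and quit subcommands are standard ufsrestore interactive commands.

# cd /var/tmp
# ufsrestore ivf /dev/rmt/0
ufsrestore > add etc/passwd
ufsrestore > extract
Specify next volume #: 1
set owner/mode for '.'? [yn] n
ufsrestore > quit

After the extraction completes, copy the recovered files from the scratch directory back to their original locations.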

How to Restore the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to restore the root (/) file system to a new disk, such as after replacing a bad root disk. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Become superuser on a cluster node with access to the metaset, other than the node you want to restore.

  2. Remove the hostname of the node being restored from all metasets.

    Run this command from a node in the metaset other than the node you are removing.


    # metaset -s setname -f -d -h nodelist
    

    -s setname

    Specifies the diskset name.

    -f

    Force.

    -d

    Deletes from the diskset.

    -h nodelist

    Specifies the name of the node to delete from the diskset.

  3. Replace the failed disk on the node on which the root (/) file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  4. Boot the node being restored.

    • If using the Solaris CD-ROM, run the following command:


      ok boot cdrom -s
      

    • If using a Solaris JumpStart™ server, run the following command:


      ok boot net -s
      

  5. Create all the partitions and swap on the root disk using the format(1M) command.

    Recreate the original partitioning scheme that was on the failed disk.

  6. Create the root (/) file system and other file systems as appropriate, using the newfs(1M) command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  7. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  8. Use the following commands to restore the root (/) file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

    The file system is now restored.

  9. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  10. Reboot the node in single-user mode.


    # reboot -- "-s"
    

  11. Replace the disk ID using the scdidadm(1M) command.


    # scdidadm -R rootdisk
    

  12. Use the metadb(1M) command to recreate the state database replicas.


    # metadb -c copies -af raw-disk-device
    

    -c copies

    Specifies the number of replicas to create.

    -f raw-disk-device

    Raw disk device on which to create replicas.

    -a

    Adds replicas.

  13. Reboot the node in cluster mode.

    1. Start the reboot.


      # reboot
      

      During this boot you might see an error or warning message, ending with the following instruction:


      Type control-d to proceed with normal startup,
      (or give root password for system maintenance):

    2. Press CTRL-d to boot into multiuser mode.

  14. From a cluster node other than the restored node, use the metaset(1M) command to add the restored node to all metasets.


    phys-schost-2# metaset -s setname -a -h nodelist
    

    -a

    Creates and adds the host to the diskset.

    The node is rebooted into cluster mode. The cluster is ready to use.

Example—Restoring the root (/) File System (Solstice DiskSuite/Solaris Volume Manager)

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the diskset schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.


[Become superuser on a cluster node other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]
ok boot cdrom -s
[Use format and newfs to recreate partitions and file systems.]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Replace the disk ID:]
# scdidadm -R /dev/dsk/c0t0d0
[Recreate state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
# reboot
Press CTRL-d to boot into multiuser mode.
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

How to Restore a root (/) File System That Was on a Metadevice (Solstice DiskSuite/Solaris Volume Manager)

Use this procedure to restore a root (/) file system that was on a metadevice when the backups were performed. Perform this procedure under circumstances such as when a root disk is corrupted and replaced with a new disk. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Become superuser on a cluster node with access to the metaset, other than the node you want to restore.

  2. Remove the hostname of the node being restored from all metasets.


    # metaset -s setname -f -d -h nodelist
    

    -s setname

    Specifies the metaset name.

    -f

    Force.

    -d

    Deletes from the metaset.

    -h nodelist

    Specifies the name of the node to delete from the metaset.

  3. Replace the failed disk on the node on which the root (/) file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  4. Boot the node being restored.

    • If using the Solaris CD-ROM, run the following command:


      ok boot cdrom -s
      

    • If using a JumpStart server, run the following command:


      ok boot net -s
      

  5. Create all the partitions and swap on the root disk using the format(1M) command.

    Recreate the original partitioning scheme that was on the failed disk.

  6. Create the root (/) file system and other file systems as appropriate, using the newfs(1M) command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  7. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  8. Use the following commands to restore the root (/) file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    

  9. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  10. Remove the lines in the /temp-mountpoint/etc/system file for MDD root information.


    * Begin MDD root info (do not edit)
    forceload: misc/md_trans
    forceload: misc/md_raid
    forceload: misc/md_mirror
    forceload: misc/md_hotspares
    forceload: misc/md_stripe
    forceload: drv/pcipsy
    forceload: drv/glm
    forceload: drv/sd
    rootdev:/pseudo/md@0:0,10,blk
    * End MDD root info (do not edit)

  11. Edit the /temp-mountpoint/etc/vfstab file to change the root entry from a metadevice to a corresponding normal slice for each file system on the root disk that is part of the metadevice.


    Example: 
    Change from—
    /dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -

  12. Unmount the temporary file system, and check the raw disk device.


    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

  13. Reboot the node in single-user mode.


    # reboot -- "-s"
    

  14. Replace the disk ID using the scdidadm command.


    # scdidadm -R rootdisk
    

  15. Use the metadb(1M) command to recreate the state database replicas.


    # metadb -c copies -af raw-disk-device
    

    -c copies

    Specifies the number of replicas to create.

    -af raw-disk-device

    Creates initial state database replicas on the named raw disk device.

  16. Reboot the node in cluster mode.

    1. Start the reboot.


      # reboot
      

      During this boot you will see error or warning messages, ending with the following instruction:


      Type control-d to proceed with normal startup,
      (or give root password for system maintenance):

    2. Press CTRL-d to boot into multiuser mode.

  17. From a cluster node other than the restored node, use the metaset(1M) command to add the restored node to all metasets.


    phys-schost-2# metaset -s setname -a -h nodelist
    

    -a

    Adds the host to the metaset.

    Set up the metadevice/mirror for root (/) according to the Solstice DiskSuite documentation.

    The node is rebooted into cluster mode. The cluster is ready to use.

Example—Restoring a root (/) File System That Was on a Metadevice (Solstice DiskSuite/Solaris Volume Manager)

The following example shows the root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0. The metaset command is run from another node in the cluster, phys-schost-2, to remove and later add back node phys-schost-1 to the metaset schost-1. All other commands are run from phys-schost-1. A new boot block is created on /dev/rdsk/c0t0d0s0, and three state database replicas are recreated on /dev/rdsk/c0t0d0s4.


[Become superuser on a cluster node with access to the metaset, 
other than the node to be restored.]
[Remove the node from the metaset:]
phys-schost-2# metaset -s schost-1 -f -d -h phys-schost-1
[Replace the failed disk and boot the node:]
ok boot cdrom -s
[Use format and newfs to recreate partitions and file systems.]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Remove the lines in /temp-mountpoint/etc/system file for MDD root information:]
* Begin MDD root info (do not edit)
forceload: misc/md_trans
forceload: misc/md_raid
forceload: misc/md_mirror
forceload: misc/md_hotspares
forceload: misc/md_stripe
forceload: drv/pcipsy
forceload: drv/glm
forceload: drv/sd
rootdev:/pseudo/md@0:0,10,blk
* End MDD root info (do not edit)
[Edit the /temp-mountpoint/etc/vfstab file:]
Example: 
Change from—
/dev/md/dsk/d10   /dev/md/rdsk/d10    /      ufs   1     no       -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  /      ufs   1     no       -
[Unmount the temporary file system and check the raw disk device:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Replace the disk ID:]
# scdidadm -R /dev/dsk/c0t0d0
[Recreate state database replicas:]
# metadb -c 3 -af /dev/rdsk/c0t0d0s4
# reboot
Press CTRL-d to boot into multiuser mode.
[Add the node back to the metaset:]
phys-schost-2# metaset -s schost-1 -a -h phys-schost-1

How to Restore a Non-Encapsulated root (/) File System (VERITAS Volume Manager)

Use this procedure to restore a non-encapsulated root (/) file system to a node. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  2. Boot the node being restored.

    • If using the Solaris CD-ROM, run the following command:


      ok boot cdrom -s
      

    • If using a JumpStart server, run the following command:


      ok boot net -s
      

  3. Create all the partitions and swap on the root disk using the format(1M) command.

    Recreate the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, using the newfs(1M) command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  6. Restore the root (/) file system from backup, and unmount and check the file system.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

    The file system is now restored.

  7. Install a new boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  8. Reboot the node into single-user mode.

    1. Start the reboot.


      # reboot -- "-s"

      During this boot you will see error or warning messages, ending with the following instruction:


      Type control-d to proceed with normal startup,
      (or give root password for system maintenance):

    2. Type the root password.

  9. Update the disk ID using the scdidadm command.


    # scdidadm -R /dev/rdsk/disk-device
    

  10. Press CTRL-d to resume in multiuser mode.

    The node reboots into cluster mode. The cluster is ready to use.

Example—Restoring a Non-Encapsulated root (/) File System (VERITAS Volume Manager)

The following example shows a non-encapsulated root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0.


[Replace the failed disk and boot the node:]
ok boot cdrom -s
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Update the disk ID:]
# scdidadm -R /dev/rdsk/c0t0d0
[Press CTRL-d to resume in multiuser mode]

How to Restore an Encapsulated root (/) File System (VERITAS Volume Manager)

Use this procedure to restore an encapsulated root (/) file system to a node. The node being restored should not be booted. Be sure the cluster is running problem-free before performing the restore procedure.


Note –

Since you must partition the new disk using the same format as the failed disk, identify the partitioning scheme before you begin this procedure, and recreate file systems as appropriate.


  1. Replace the failed disk on the node where the root file system will be restored.

    Refer to disk replacement procedures in the documentation that came with your server.

  2. Boot the node being restored.

    • If using the Solaris CD-ROM, run the following command:


      ok boot cdrom -s
      
    • If using a JumpStart server, run the following command:


      ok boot net -s
      
  3. Create all the partitions and swap on the root disk using the format(1M) command.

    Recreate the original partitioning scheme that was on the failed disk.

  4. Create the root (/) file system and other file systems as appropriate, using the newfs(1M) command.

    Recreate the original file systems that were on the failed disk.


    Note –

    Be sure to create the /global/.devices/node@nodeid file system.


  5. Mount the root (/) file system on a temporary mount point.


    # mount device temp-mountpoint
    

  6. Restore the root (/) file system from backup.


    # cd temp-mountpoint
    # ufsrestore rvf dump-device
    # rm restoresymtable
    

  7. Create an empty install-db file.

    This puts the node in VxVM install mode at the next reboot.


    # touch /temp-mountpoint/etc/vx/reconfig.d/state.d/install-db
    

  8. Remove or comment out the following entries from the /temp-mountpoint/etc/system file.


    * rootdev:/pseudo/vxio@0:0
    * set vxio:vol_rootdev_is_volume=1

  9. Edit the /temp-mountpoint/etc/vfstab file and replace all VxVM mount points with the standard disk devices for the root disk, such as /dev/dsk/c0t0d0s0.


    Example: 
    Change from—
    /dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol /      ufs   1     no -
    
    Change to—
    /dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -

  10. Unmount the temporary file system and check the file system.


    # cd /
    # umount temp-mountpoint
    # fsck raw-disk-device
    

  11. Install the boot block on the new disk.


    # /usr/sbin/installboot /usr/platform/`uname -i`/lib/fs/ufs/bootblk raw-disk-device
    

  12. Reboot the node in single-user mode.


    # reboot -- "-s"
    

  13. Update the disk ID using scdidadm(1M).


    # scdidadm -R /dev/rdsk/c0t0d0
    

  14. Run vxinstall to encapsulate the disk and reboot.


    # vxinstall
    

  15. If there is a conflict in minor number with any other system, unmount the global devices and reminor the disk group.

    • Unmount the global devices file system on the cluster node.


      # umount /global/.devices/node@nodeid
      

    • Reminor the rootdg disk group on the cluster node.


      # vxdg reminor rootdg 100
      

  16. Shut down and reboot the node in cluster mode.


    # shutdown -g0 -i6 -y
    

Example—Restoring an Encapsulated root (/) File System (VERITAS Volume Manager)

The following example shows an encapsulated root (/) file system restored to the node phys-schost-1 from the tape device /dev/rmt/0.


[Replace the failed disk and boot the node:]
ok boot cdrom -s
[Use format and newfs to create partitions and file systems]
[Mount the root file system on a temporary mount point:]
# mount /dev/dsk/c0t0d0s0 /a
[Restore the root file system:]
# cd /a
# ufsrestore rvf /dev/rmt/0
# rm restoresymtable
[Create an empty install-db file:]
# touch /a/etc/vx/reconfig.d/state.d/install-db
[Edit /etc/system on the temporary file system and 
remove or comment out the following entries:]
	* rootdev:/pseudo/vxio@0:0
	* set vxio:vol_rootdev_is_volume=1
[Edit /etc/vfstab on the temporary file system:]
Example: 
Change from—
/dev/vx/dsk/rootdg/rootvol /dev/vx/rdsk/rootdg/rootvol /      ufs   1     no -

Change to—
/dev/dsk/c0t0d0s0 /dev/rdsk/c0t0d0s0  / ufs   1     no       -
[Unmount the temporary file system, then check the file system:]
# cd /
# umount /a
# fsck /dev/rdsk/c0t0d0s0
[Install a new boot block:]
# /usr/sbin/installboot /usr/platform/`uname \
-i`/lib/fs/ufs/bootblk /dev/rdsk/c0t0d0s0
[Reboot in single-user mode:]
# reboot -- "-s"
[Update the disk ID:]
# scdidadm -R /dev/rdsk/c0t0d0
[Run vxinstall:]
# vxinstall
Choose to encapsulate the root disk.
[If there is a conflict in minor number, reminor the rootdg disk group:]
# umount /global/.devices/node@nodeid
# vxdg reminor rootdg 100
# shutdown -g0 -i6 -y

Where to Go From Here

For instructions about how to mirror the encapsulated root disk, see the Sun Cluster 3.1 10/03 Software Installation Guide.