Sun Cluster System Administration Guide for Solaris OS

How to Perform Online Backups for Mirrors (Solaris Volume Manager)

A mirrored Solstice DiskSuite metadevice or Solaris Volume Manager volume can be backed up without unmounting it or taking the entire mirror offline. One of the submirrors must be taken offline temporarily, thus losing mirroring, but it can be placed online and resynchronized as soon as the backup is complete, without halting the system or denying user access to the data. Using mirrors to perform online backups creates a backup that is a “snapshot” of an active file system.

A problem might occur if a program writes data onto the volume immediately before the lockfs command is run. To prevent this problem, temporarily stop all the services running on this node. Also, ensure the cluster is running without errors before performing the backup procedure.
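
For example, before you begin you might confirm that the cluster is error free. The following is only a sketch; it assumes the Sun Cluster 3.2 cluster(1CL) command is available (on earlier releases, scstat(1M) reports similar status information).


    # cluster status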

The phys-schost# prompt reflects a global-cluster prompt. Perform this procedure on a global cluster.

This procedure provides the long forms of the Sun Cluster commands. Most commands also have short forms. Except for the long and short forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix B, Sun Cluster Object-Oriented Commands.
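
For example, clresourcegroup and its short form clrg invoke the same command, so either of the following reports resource-group status.


    # clresourcegroup status
    # clrg status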

  1. Become superuser or assume an equivalent role on the cluster node that you are backing up.

  2. Use the metaset(1M) command to determine which node owns the disk set that contains the volume you are backing up.


    # metaset -s setname
    
    -s setname

    Specifies the disk set name.
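
    If a different node owns the disk set, either perform the backup from that node or move ownership to the local node first. As a sketch, assuming the disk set is registered as a Sun Cluster device group of the same name, the Sun Cluster 3.2 syntax is similar to the following.


    # cldevicegroup switch -n nodename setname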

  3. Use the lockfs(1M) command with the -w option to lock the file system from writes.


    # lockfs -w mountpoint 
    

    Note –

    You must lock the file system only if a UFS file system resides on the mirror. For example, if the Solstice DiskSuite metadevice or Solaris Volume Manager volume is set up as a raw device for database management software or some other specific application, you do not need to use the lockfs command. You might, however, run the appropriate vendor-dependent utility to flush any buffers and lock access.


  4. Use the metastat(1M) command to determine the names of the submirrors.


    # metastat -s setname -p
    
    -p

    Displays the status in a format similar to the md.tab file.

  5. Use the metadetach(1M) command to take one submirror offline from the mirror.


    # metadetach -s setname mirror submirror
    

    Note –

    Reads continue to be made from the other submirrors. However, the offline submirror is unsynchronized as soon as the first write is made to the mirror. This inconsistency is corrected when the offline submirror is brought back online. You do not need to run fsck.


  6. Unlock the file system and allow writes to continue by using the lockfs command with the -u option.


    # lockfs -u mountpoint 
    
  7. Perform a file system check.


    # fsck /dev/md/diskset/rdsk/submirror
    
  8. Back up the offline submirror to tape or another medium.

    Use the ufsdump(1M) command or the backup utility that you usually use.


    # ufsdump 0ucf dump-device submirror
    

    Note –

    Use the raw device (/rdsk) name for the submirror, rather than the block device (/dsk) name.

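    If you want to confirm that the dump is readable before reattaching the submirror, you might, for example, list its table of contents with the ufsrestore(1M) command.


    # ufsrestore tf dump-device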

  9. Use the metattach(1M) command to place the metadevice or volume back online.


    # metattach -s setname mirror submirror
    

    When the metadevice or volume is placed online, it is automatically resynchronized with the mirror.

  10. Use the metastat command to verify that the submirror is resynchronizing.


    # metastat -s setname mirror
    

Example 12–4 Performing Online Backups for Mirrors (Solaris Volume Manager)

In the following example, the cluster node phys-schost-1 is the owner of the metaset schost-1, so the backup procedure is performed from phys-schost-1. The mirror /dev/md/schost-1/dsk/d0 consists of the submirrors d10, d20, and d30.


[Determine the owner of the metaset:]
# metaset -s schost-1
Set name = schost-1, Set number = 1
Host                Owner
  phys-schost-1     Yes 
...
[Lock the file system from writes:] 
# lockfs -w /global/schost-1
[List the submirrors:]
# metastat -s schost-1 -p
schost-1/d0 -m schost-1/d10 schost-1/d20 schost-1/d30 1
schost-1/d10 1 1 d4s0
schost-1/d20 1 1 d6s0
schost-1/d30 1 1 d8s0
[Take a submirror offline:]
# metadetach -s schost-1 d0 d30
[Unlock the file system:]
# lockfs -u /global/schost-1
[Check the file system:]
# fsck /dev/md/schost-1/rdsk/d30
[Copy the submirror to the backup device:]
# ufsdump 0ucf /dev/rmt/0 /dev/md/schost-1/rdsk/d30
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 25 16:15:51 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/schost-1/rdsk/d30 to /dev/rdsk/c1t9d0s0.
  ...
  DUMP: DUMP IS DONE
[Bring the submirror back online:]
# metattach -s schost-1 d0 d30
schost-1/d0: submirror schost-1/d30 is attached
[Verify that the submirror is resynchronizing:]
# metastat -s schost-1 d0
schost-1/d0: Mirror
    Submirror 0: schost-1/d10
      State: Okay
    Submirror 1: schost-1/d20
      State: Okay
    Submirror 2: schost-1/d30
      State: Resyncing
    Resync in progress: 42% done
    Pass: 1
    Read option: roundrobin (default)
...
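
The resynchronization completes in the background. When it has finished, running the same metastat command again should show the reattached submirror in the Okay state, with no Resync in progress line.


[Later, confirm that the resynchronization has completed:]
# metastat -s schost-1 d0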