| Task | For Instructions, Go To… |
|---|---|
| Find the names of the file systems you want to back up | |
| Calculate how many tapes you will need to contain a full backup | How to Determine the Number of Tapes Needed for a Full Backup |
| Back up the root file system | |
| Perform online backup for mirrored or plexed file systems | How to Perform Online Backups for Mirrors (Solstice DiskSuite/Solaris Volume Manager)<br>SPARC: How to Perform Online Backups for Volumes (VERITAS Volume Manager) |
Use this procedure to determine the names of the file systems you want to back up.
Display the contents of the /etc/vfstab file.
You do not need to be superuser to run this command.
```
% more /etc/vfstab
```
Look in the mount point column for the name of the file system you want to back up.
Use this name when you back up the file system.
In the following example, the names of available file systems listed in the /etc/vfstab file are displayed.
```
% more /etc/vfstab
#device             device              mount    FS     fsck  mount    mount
#to mount           to fsck             point    type   pass  at boot  options
#
#/dev/dsk/c1d0s2    /dev/rdsk/c1d0s2    /usr     ufs    1     yes      -
f                   -                   /dev/fd  fd     -     no       -
/proc               -                   /proc    proc   -     no       -
/dev/dsk/c1t6d0s1   -                   -        swap   -     no       -
/dev/dsk/c1t6d0s0   /dev/rdsk/c1t6d0s0  /        ufs    1     no       -
/dev/dsk/c1t6d0s3   /dev/rdsk/c1t6d0s3  /cache   ufs    2     yes      -
swap                -                   /tmp     tmpfs  -     yes      -
```
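If you prefer to pull the mount points out programmatically, a one-liner such as the following can help. It is a sketch rather than part of the documented procedure; it prints the mount-point column, skipping comment lines and entries (such as swap devices) that have no mount point.

```
% awk '!/^#/ && NF >= 3 && $3 != "-" {print $3}' /etc/vfstab
```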
Use this procedure to calculate the number of tapes you will need to back up a file system.
Become superuser on the cluster node you want to back up.
Estimate the size of the backup in bytes.
```
# ufsdump S filesystem
```
- `S`: Displays the estimated number of bytes needed to perform the backup.
- `filesystem`: Specifies the name of the file system you want to back up.
Divide the estimated size by the capacity of the tape to see how many tapes you need.
In the following example, the file system size of 905,881,620 bytes fits easily on a single 4 GB tape (905,881,620 ÷ 4,000,000,000 is less than 1, so one tape suffices).
```
# ufsdump S /global/phys-schost-1
905881620
```
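The division can also be scripted. The following sketch pipes the ufsdump estimate into nawk and rounds up; the file system name and the 4,000,000,000-byte tape capacity are assumptions taken from the example above, so substitute your own values.

```
# ufsdump S /global/phys-schost-1 | \
    nawk -v cap=4000000000 '{ printf("Tapes needed: %d\n", ($1 + cap - 1) / cap) }'
Tapes needed: 1
```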
Use this procedure to back up the root (/) file system of a cluster node. Be sure the cluster is running problem-free before performing the backup procedure.
Become superuser on the cluster node you want to back up.
Switch each running data service from the node to be backed up to another node in the cluster.
```
# scswitch -z -D disk-device-group[,...] -h node[,...]
```
- `-z`: Performs the switch.
- `-D disk-device-group`: Name of the disk device group to be switched.
- `-h node`: Name of the cluster node to switch the disk device group to. This node becomes the new primary.
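For instance, using the hypothetical device group and node names that appear in the examples later in this section, the command might look like this:

```
# scswitch -z -D schost-1 -h phys-schost-2
```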
Shut down the node.
```
# shutdown -g0 -y -i0
```
Reboot the node in non-cluster mode.
SPARC:

```
ok boot -x
```
x86:
```
                    <<< Current Boot Parameters >>>
Boot path: /pci@0,0/pci8086,2545@3/pci8086,1460@1d/pci8086,341a@7,1/
sd@0,0:a
Boot args:

Type    b [file-name] [boot-flags] <ENTER>   to boot with options
or      i <ENTER>                            to enter boot interpreter
or      <ENTER>                              to boot with defaults

                    <<< timeout in 5 seconds >>>
Select (b)oot or (i)nterpreter: b -x
```
Back up the root (/) file system.
If the root disk is not encapsulated, use the following command.
```
# ufsdump 0ucf dump-device /
```
If the root disk is encapsulated, use the following command.
```
# ufsdump 0ucf dump-device /dev/vx/rdsk/rootvol
```
Refer to the ufsdump(1M) man page for more information.
Reboot the node in cluster mode.
```
# init 6
```
In the following example, the root (/) file system is backed up onto tape device /dev/rmt/0.
```
# ufsdump 0ucf /dev/rmt/0 /
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 18 18:06:15 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/rdsk/c0t0d0s0 (phys-schost-1:/) to /dev/rmt/0
  DUMP: Mapping (Pass I) [regular files]
  DUMP: Mapping (Pass II) [directories]
  DUMP: Estimated 859086 blocks (419.48MB).
  DUMP: Dumping (Pass III) [directories]
  DUMP: Dumping (Pass IV) [regular files]
  DUMP: 859066 blocks (419.47MB) on 1 volume at 2495 KB/sec
  DUMP: DUMP IS DONE
  DUMP: Level 0 dump on Tue Apr 18 18:06:15 2000
```
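As an optional sanity check (not part of the documented procedure), you can list the table of contents of the tape with ufsrestore(1M) to confirm that the dump is readable:

```
# ufsrestore tf /dev/rmt/0
```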
A mirrored Solstice DiskSuite metadevice or Solaris Volume Manager volume can be backed up without unmounting it or taking the entire mirror offline. One of the submirrors must be taken offline temporarily, thus losing mirroring, but it can be placed online and resynced as soon as the backup is complete, without halting the system or denying user access to the data. Using mirrors to perform online backups creates a backup that is a “snapshot” of an active file system.
A problem might occur if a program writes data onto the volume immediately before the lockfs command is run. To prevent this problem, temporarily stop all the services running on this node. Also, be sure the cluster is running problem-free before performing the backup procedure.
Become superuser on the cluster node you want to back up.
Use the metaset(1M) command to determine which node owns the volume that you intend to back up.
```
# metaset -s setname
```
- `-s setname`: Specifies the disk set name.
Use the lockfs(1M) command with the -w option to lock the file system from writes.
```
# lockfs -w mountpoint
```
You need to lock the file system only if a UFS file system resides on the mirror. For example, if the Solstice DiskSuite metadevice or Solaris Volume Manager volume is set up as a raw device for database management software or some other specific application, you do not need to use the lockfs command. You might, however, want to run the appropriate vendor-dependent utility to flush any buffers and lock access.
Use the metastat(1M) command to determine the names of the submirrors.
```
# metastat -s setname -p
```
- `-p`: Displays the status in a format similar to the md.tab file.
Use the metadetach(1M) command to take one submirror offline from the mirror.
```
# metadetach -s setname mirror submirror
```
Reads continue to be made from the other submirrors. However, the offline submirror falls out of sync as soon as the first write is made to the mirror. This inconsistency is corrected when the offline submirror is brought back online; you do not need to run fsck to repair it.
Unlock the file systems and allow writes to continue, using the lockfs command with the -u option.
```
# lockfs -u mountpoint
```
Perform a file system check.
```
# fsck /dev/md/diskset/rdsk/submirror
```
Back up the offline submirror to tape or another medium.
Use the ufsdump(1M) command or the backup utility that you usually use.
```
# ufsdump 0ucf dump-device submirror
```
Use the raw device (/rdsk) name for the submirror, rather than the block device (/dsk) name.
Use the metattach(1M) command to place the metadevice or volume back online.
```
# metattach -s setname mirror submirror
```
When the metadevice or volume is placed online, it is automatically resynced with the mirror.
Use the metastat command to verify that the submirror is resyncing.
```
# metastat -s setname mirror
```
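The preceding steps can also be strung together into a script. The following is a minimal sketch, not a hardened tool: it assumes a UFS file system on the mirror, omits error handling, and uses the hypothetical names from the example that follows (disk set schost-1, mirror d0, submirror d30, mount point /global/schost-1, tape drive /dev/rmt/0).

```
#!/bin/ksh
# Minimal sketch of the online mirror backup above.
# All names are hypothetical; error handling is omitted.
SETNAME=schost-1             # disk set
MIRROR=d0                    # mirror metadevice
SUBMIRROR=d30                # submirror to detach and back up
MNT=/global/schost-1         # UFS mount point on the mirror
TAPE=/dev/rmt/0              # dump device

lockfs -w $MNT                                        # block writes during the detach
metadetach -s $SETNAME $MIRROR $SUBMIRROR             # take one submirror offline
lockfs -u $MNT                                        # allow writes again
fsck /dev/md/$SETNAME/rdsk/$SUBMIRROR                 # check the detached copy
ufsdump 0ucf $TAPE /dev/md/$SETNAME/rdsk/$SUBMIRROR   # back up the raw device
metattach -s $SETNAME $MIRROR $SUBMIRROR              # reattach; resync starts
metastat -s $SETNAME $MIRROR                          # confirm the resync is under way
```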
In the following example, the cluster node phys-schost-1 is the owner of the metaset schost-1, therefore the backup procedure is performed from phys-schost-1. The mirror /dev/md/schost-1/dsk/d0 consists of the submirrors d10, d20, and d30.
```
[Determine the owner of the metaset:]
# metaset -s schost-1
Set name = schost-1, Set number = 1

Host                Owner
  phys-schost-1     Yes
...
[Lock the file system from writes:]
# lockfs -w /global/schost-1
[List the submirrors:]
# metastat -s schost-1 -p
schost-1/d0 -m schost-1/d10 schost-1/d20 schost-1/d30 1
schost-1/d10 1 1 d4s0
schost-1/d20 1 1 d6s0
schost-1/d30 1 1 d8s0
[Take a submirror offline:]
# metadetach -s schost-1 d0 d30
[Unlock the file system:]
# lockfs -u /global/schost-1
[Check the file system:]
# fsck /dev/md/schost-1/rdsk/d30
[Copy the submirror to the backup device:]
# ufsdump 0ucf /dev/rmt/0 /dev/md/schost-1/rdsk/d30
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 25 16:15:51 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/md/schost-1/rdsk/d30 to /dev/rdsk/c1t9d0s0.
  ...
  DUMP: DUMP IS DONE
[Bring the submirror back online:]
# metattach -s schost-1 d0 d30
schost-1/d0: submirror schost-1/d30 is attached
[Resync the submirror:]
# metastat -s schost-1 d0
schost-1/d0: Mirror
    Submirror 0: schost-1/d10
      State: Okay
    Submirror 1: schost-1/d20
      State: Okay
    Submirror 2: schost-1/d30
      State: Resyncing
    Resync in progress: 42% done
    Pass: 1
    Read option: roundrobin (default)
...
```
VERITAS Volume Manager refers to each mirror of a volume as a plex. A plex can be backed up without unmounting it or taking the entire volume offline: you create a snapshot copy of the volume and back up this temporary volume, without halting the system or denying user access to the data.
Be sure the cluster is running problem-free before performing the backup procedure.
Log on to any node in the cluster, and become superuser on the current primary node for the disk group.
List the disk group information.
```
# vxprint -g diskgroup
```
Run the scstat(1M) command to see which node currently has the disk group imported; that node is the primary node for the disk group.

```
# scstat -D
```
- `-D`: Shows the status for all disk device groups.
Create a snapshot of the volume using the vxassist command.
```
# vxassist -g diskgroup snapstart volume
```
Creating a snapshot can take a long time, depending on the size of your volume.
Verify the new volume was created.
```
# vxprint -g diskgroup
```
When the snapshot operation is complete, a status of Snapdone displays in the State field of the new snapshot plex.
Stop any data services that are accessing the file system.
```
# scswitch -z -g resource-group[,...] -h ""
```
Stop all data services to ensure that the data file system is properly backed up. If no data services are running, you can skip this step and the later step that restarts them.
Create a backup volume named bkup-vol and attach the snapshot volume to it using the vxassist command.
```
# vxassist -g diskgroup snapshot volume bkup-vol
```
Restart any data services that you stopped earlier, using the scswitch(1M) command.
```
# scswitch -z -g resource-group[,...] -h node[,...]
```
Verify that the new volume bkup-vol was created, using the vxprint command.
```
# vxprint -g diskgroup
```
Register the disk group configuration change.
```
# scconf -c -D name=diskgroup,sync
```
Check the backup volume using the fsck command.
```
# fsck -y /dev/vx/rdsk/diskgroup/bkup-vol
```
Perform a backup to copy the volume bkup-vol to tape or another medium.
Use the ufsdump(1M) command or the backup utility you normally use.
```
# ufsdump 0ucf dump-device /dev/vx/dsk/diskgroup/bkup-vol
```
Remove the temporary volume using vxedit.
```
# vxedit -rf rm bkup-vol
```
Register the disk group configuration changes using the scconf(1M) command.
```
# scconf -c -D name=diskgroup,sync
```
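As with the mirror procedure, these steps can be collected into a script. The following is a minimal sketch under the same caveats, using the hypothetical names from the example that follows (disk group schost-1, volume vol01, resource group nfs-rg, node phys-schost-1). The loop that waits for the Snapdone state is one way you might poll, not part of the documented procedure.

```
#!/bin/ksh
# Minimal sketch of the plex snapshot backup above.
# All names are hypothetical; error handling is omitted.
DG=schost-1                  # disk group
VOL=vol01                    # volume to back up
RG=nfs-rg                    # resource group that uses the volume
NODE=phys-schost-1           # node to host the resource group afterward
TAPE=/dev/rmt/0              # dump device

vxassist -g $DG snapstart $VOL                 # build the snapshot plex (can be slow)
until vxprint -g $DG | grep SNAPDONE > /dev/null
do
    sleep 30                                   # wait for the plex to finish syncing
done
scswitch -z -g $RG -h ""                       # stop the data services
vxassist -g $DG snapshot $VOL bkup-vol         # split the snapshot plex into bkup-vol
scswitch -z -g $RG -h $NODE                    # restart the data services
scconf -c -D name=$DG,sync                     # register the configuration change
fsck -y /dev/vx/rdsk/$DG/bkup-vol              # check the backup volume
ufsdump 0ucf $TAPE /dev/vx/rdsk/$DG/bkup-vol   # copy it to tape (raw device, as in the example)
vxedit -rf rm bkup-vol                         # remove the temporary volume
scconf -c -D name=$DG,sync                     # register the removal
```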
In the following example, the cluster node phys-schost-2 is the current primary for the disk group schost-1, so the backup procedure is performed from phys-schost-2. The volume vol01 is copied and then associated with a new volume, bkup-vol.
```
[Become superuser on the primary node.]
[Identify the current primary node for the disk group:]
# scstat -D
-- Device Group Servers --
                          Device Group     Primary           Secondary
                          ------------     -------           ---------
  Device group servers:   rmt/1            -                 -
  Device group servers:   schost-1         phys-schost-2     phys-schost-1

-- Device Group Status --
                              Device Group     Status
                              ------------     ------
  Device group status:        rmt/1            Offline
  Device group status:        schost-1         Online
[List the disk group information:]
# vxprint -g schost-1
TY NAME             ASSOC        KSTATE    LENGTH    PLOFFS  STATE    TUTIL0  PUTIL0
dg schost-1         schost-1     -         -         -       -        -       -

dm schost-101       c1t1d0s2     -         17678493  -       -        -       -
dm schost-102       c1t2d0s2     -         17678493  -       -        -       -
dm schost-103       c2t1d0s2     -         8378640   -       -        -       -
dm schost-104       c2t2d0s2     -         17678493  -       -        -       -
dm schost-105       c1t3d0s2     -         17678493  -       -        -       -
dm schost-106       c2t3d0s2     -         17678493  -       -        -       -

v  vol01            gen          ENABLED   204800    -       ACTIVE   -       -
pl vol01-01         vol01        ENABLED   208331    -       ACTIVE   -       -
sd schost-101-01    vol01-01     ENABLED   104139    0       -        -       -
sd schost-102-01    vol01-01     ENABLED   104139    0       -        -       -
pl vol01-02         vol01        ENABLED   208331    -       ACTIVE   -       -
sd schost-103-01    vol01-02     ENABLED   103680    0       -        -       -
sd schost-104-01    vol01-02     ENABLED   104139    0       -        -       -
pl vol01-03         vol01        ENABLED   LOGONLY   -       ACTIVE   -       -
sd schost-103-02    vol01-03     ENABLED   5         LOG     -        -       -
[Start the snapshot operation:]
# vxassist -g schost-1 snapstart vol01
[Verify the new volume was created:]
# vxprint -g schost-1
TY NAME             ASSOC        KSTATE    LENGTH    PLOFFS  STATE     TUTIL0  PUTIL0
dg schost-1         schost-1     -         -         -       -         -       -

dm schost-101       c1t1d0s2     -         17678493  -       -         -       -
dm schost-102       c1t2d0s2     -         17678493  -       -         -       -
dm schost-103       c2t1d0s2     -         8378640   -       -         -       -
dm schost-104       c2t2d0s2     -         17678493  -       -         -       -
dm schost-105       c1t3d0s2     -         17678493  -       -         -       -
dm schost-106       c2t3d0s2     -         17678493  -       -         -       -

v  vol01            gen          ENABLED   204800    -       ACTIVE    -       -
pl vol01-01         vol01        ENABLED   208331    -       ACTIVE    -       -
sd schost-101-01    vol01-01     ENABLED   104139    0       -         -       -
sd schost-102-01    vol01-01     ENABLED   104139    0       -         -       -
pl vol01-02         vol01        ENABLED   208331    -       ACTIVE    -       -
sd schost-103-01    vol01-02     ENABLED   103680    0       -         -       -
sd schost-104-01    vol01-02     ENABLED   104139    0       -         -       -
pl vol01-03         vol01        ENABLED   LOGONLY   -       ACTIVE    -       -
sd schost-103-02    vol01-03     ENABLED   5         LOG     -         -       -
pl vol01-04         vol01        ENABLED   208331    -       SNAPDONE  -       -
sd schost-105-01    vol01-04     ENABLED   104139    0       -         -       -
sd schost-106-01    vol01-04     ENABLED   104139    0       -         -       -
[Stop data services, if necessary:]
# scswitch -z -g nfs-rg -h ""
[Create a copy of the volume:]
# vxassist -g schost-1 snapshot vol01 bkup-vol
[Restart data services, if necessary:]
# scswitch -z -g nfs-rg -h phys-schost-1
[Verify bkup-vol was created:]
# vxprint -g schost-1
TY NAME             ASSOC        KSTATE    LENGTH    PLOFFS  STATE    TUTIL0  PUTIL0
dg schost-1         schost-1     -         -         -       -        -       -

dm schost-101       c1t1d0s2     -         17678493  -       -        -       -
...

v  bkup-vol         gen          ENABLED   204800    -       ACTIVE   -       -
pl bkup-vol-01      bkup-vol     ENABLED   208331    -       ACTIVE   -       -
sd schost-105-01    bkup-vol-01  ENABLED   104139    0       -        -       -
sd schost-106-01    bkup-vol-01  ENABLED   104139    0       -        -       -

v  vol01            gen          ENABLED   204800    -       ACTIVE   -       -
pl vol01-01         vol01        ENABLED   208331    -       ACTIVE   -       -
sd schost-101-01    vol01-01     ENABLED   104139    0       -        -       -
sd schost-102-01    vol01-01     ENABLED   104139    0       -        -       -
pl vol01-02         vol01        ENABLED   208331    -       ACTIVE   -       -
sd schost-103-01    vol01-02     ENABLED   103680    0       -        -       -
sd schost-104-01    vol01-02     ENABLED   104139    0       -        -       -
pl vol01-03         vol01        ENABLED   LOGONLY   -       ACTIVE   -       -
sd schost-103-02    vol01-03     ENABLED   5         LOG     -        -       -
[Synchronize the disk group with cluster framework:]
# scconf -c -D name=schost-1,sync
[Check the file systems:]
# fsck -y /dev/vx/rdsk/schost-1/bkup-vol
[Copy bkup-vol to the backup device:]
# ufsdump 0ucf /dev/rmt/0 /dev/vx/rdsk/schost-1/bkup-vol
  DUMP: Writing 63 Kilobyte records
  DUMP: Date of this level 0 dump: Tue Apr 25 16:15:51 2000
  DUMP: Date of last level 0 dump: the epoch
  DUMP: Dumping /dev/vx/rdsk/schost-1/bkup-vol to /dev/rmt/0.
  ...
  DUMP: DUMP IS DONE
[Remove the bkup-volume:]
# vxedit -rf rm bkup-vol
[Synchronize the disk group:]
# scconf -c -D name=schost-1,sync
```