This section outlines the recovery processes that you use when an entire Oracle HSM file system is corrupted or lost. The procedures vary, depending on the type of file system involved and on the backup and recovery preparations that you have made. But there are two basic tasks that you must perform: create an empty replacement file system, and then restore directories and files into it.
Before you begin, please note: if you are recovering from the loss of an Oracle HSM metadata server, make sure that you have finished restoring the Oracle HSM configuration, as described in Chapter 3, before proceeding further. The procedures in this chapter assume that the Oracle HSM software is installed and configured as it was prior to the loss of the file system.
Before you can recover files and directories, you must have somewhere to put them. So the first step in the recovery process is to create an empty, replacement file system. Proceed as follows:
Log in to the file-system metadata server as root.
root@mds1:~#
Unmount the file system, if it is currently mounted. Use the command umount mount-point, where mount-point is the directory on which the file system is mounted.
In the example, we unmount the file system /hsm/hqfs1:
root@mds1:~# umount /hsm/hqfs1
root@mds1:~#
Open the /etc/opt/SUNWsamfs/mcf file in a text editor. Check the hardware configuration. If you have had to change hardware, edit the file accordingly and save the changes.
In the example, we replace the equipment identifiers for two failed disk devices with those of their replacements. Note that the equipment ordinals remain unchanged:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment               Equipment Equipment Family    Device Additional
# Identifier              Ordinal   Type      Set       State  Parameters
#-----------------------  --------- --------- --------- ------ -------------
hqfs1                     100       ms        hqfs1     on
/dev/dsk/c1t3d0s3         101       md        hqfs1     on
/dev/dsk/c1t4d0s5         102       md        hqfs1     on
# Tape library
/dev/scsi/changer/c1t2d0  800       rb        lib800    on     .../lib800_cat
/dev/rmt/0cbn             801       li        lib800    on
/dev/rmt/1cbn             802       li        lib800    on
:wq
root@mds1:~#
Check the mcf file for errors. Use the command sam-fsd.
The sam-fsd command reads the Oracle HSM configuration files and initializes the software. It stops if it encounters an error:
root@mds1:~# sam-fsd
If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step.
In the example below, sam-fsd reports an unspecified problem with a device. This is probably a typo in an equipment identifier field:
root@mds1:~# sam-fsd
Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem hqfs1
sam-fsd: Problem with file system devices.
Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed the letter o instead of a 0 in the slice number part of the equipment identifier for device 102, the second md device:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
...
hqfs1              100  ms  hqfs1  on
/dev/dsk/c0t0d0s0  101  md  hqfs1  on
/dev/dsk/c0t3d0so  102  md  hqfs1  on
So we correct the error, save the file, and recheck:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
...
hqfs1              100  ms  hqfs1  on
/dev/dsk/c0t0d0s0  101  md  hqfs1  on
/dev/dsk/c0t3d0s0  102  md  hqfs1  on
:wq
root@mds1:~# sam-fsd
When the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step.
In the example, sam-fsd runs without error:
root@mds1:~# sam-fsd
Trace file controls:
sam-amld /var/opt/SUNWsamfs/trace/sam-amld
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
root@mds1:~#
Tell the Oracle HSM software to read the mcf file and reconfigure itself accordingly:
root@mds1:~# samd config
Configuring SAM-FS
root@mds1:~#
Create the replacement file system. Use the command sammkfs family-set-name, where family-set-name is the name of the file system.
In the example, we recreate file system hqfs1:
root@mds1:~# sammkfs hqfs1
Building 'hqfs1' will destroy the contents of devices:
    /dev/dsk/c0t0d0s0
    /dev/dsk/c0t3d0s0
Do you wish to continue? [y/N] yes
total data kilobytes = ...
root@mds1:~#
Recreate the mount-point directory for the file system, if necessary.
In the example, we recreate the directory /hsm/hqfs1:
root@mds1:~# mkdir /hsm
root@mds1:~# mkdir /hsm/hqfs1
root@mds1:~#
Back up the operating system's /etc/vfstab file.
root@mds1:~# cp /etc/vfstab /etc/vfstab.backup
root@mds1:~#
Open the /etc/vfstab file in a text editor. If the /etc/vfstab file does not contain mount parameters for the file system that you are restoring, you will have to restore the mount parameters.
In the example, the Oracle HSM server is installed on a replacement host. So the file contains no mount parameters for the file system that we are restoring, hqfs1:
root@mds1:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  ---------------------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
If possible, when you must restore mount parameters, open a backup copy of the original /etc/vfstab file and copy the required line into the current /etc/vfstab file. When the changes are complete, save the file and close the editor.
In the example, we have a backup copy, /zfs1/sam_config/20161027/etc/vfstab.20161027. So we copy the line for the hqfs1 file system from the backup copy and paste it into the current /etc/vfstab file:
root@mds1:~# vi /zfs1/sam_config/20161027/etc/vfstab.20161027
#File
#Device    Device   Mount       System  fsck  Mount    Mount
#to Mount  to fsck  Point       Type    Pass  at Boot  Options
#--------  -------  ----------  ------  ----  -------  ------------------
/devices   -        /devices    devfs   -     no       -
/proc      -        /proc       proc    -     no       -
...
hqfs1      -        /hsm/hqfs1  samfs   -     yes      stripe=1,bg
:q
root@mds1:~# vi /etc/vfstab
#File
#Device    Device   Mount       System  fsck  Mount    Mount
#to Mount  to fsck  Point       Type    Pass  at Boot  Options
#--------  -------  ----------  ------  ----  -------  ------------------
/devices   -        /devices    devfs   -     no       -
/proc      -        /proc       proc    -     no       -
...
hqfs1      -        /hsm/hqfs1  samfs   -     yes      stripe=1,bg
:wq
root@mds1:~#
Mount the file system.
In the example, we mount the file system hqfs1:
root@mds1:~# mount /hsm/hqfs1
root@mds1:~#
Now, start restoring directories and files.
Once you have recreated the base file system, you can start to restore directories and files. There are two possible approaches:
Restoring files and directories from a samfsdump (qfsdump) recovery point file is by far the best option, if you have created and safely stored recovery points on a regular basis.
This approach returns the file system to full functionality quickly, because it restores the file-system metadata. An archiving file system can immediately access data on archival media and stage files back to the disk cache, either all at once or as needed when users access files. Files are restored with their original attributes.
If the recovery point contains data as well as metadata, this approach is also the only way to restore stand-alone (non-archiving) file systems that are not backed up by third-party applications.
Restoring files and directories directly from archival media, without a recovery point file, using a recovery script and the Oracle HSM star utility, is slower and more labor-intensive, but it is the available fallback when no recovery point exists.
Restoring from a samfsdump (qfsdump) Recovery Point File
Whenever possible, you should base file-system recovery efforts on the most recent available recovery point file. This approach is by far the fastest, most reliable, most thorough, and least labor-intensive way of recovering from the failure of an Oracle HSM file system. So, if a recovery point file exists, proceed as follows:
Log in to the file-system metadata server as root.
root@mds1:~#
If you have not already done so, stop archiving and recycling using the procedures in "Stopping Archiving and Recycling Processes".
Identify the most recent available recovery point file.
In the example, we have been creating dated recovery point files for the file system hqfs1 in a well-known location, the subdirectory hqfs1_recovery on the independent file system /zfs1. So the latest file, 20161024, is easy to find:
root@mds1:~# ls /zfs1/hqfs1_recovery/
20161021  20161022  20161023  20161024
root@mds1:~#
Change to the mount-point directory for the recreated file system.
In the example, the recreated file system is mounted at /hsm/hqfs1:
root@mds1:~# cd /hsm/hqfs1
root@mds1:~#
Restore the entire file system relative to the current directory. Use the command samfsrestore -T -f recovery-point-file -g logfile or the QFS-only command qfsrestore -T -f recovery-point-file -g logfile, where:
-T displays recovery statistics when the command terminates, including the number of files and directories processed and the number of errors and warnings.
-f recovery-point-file specifies the path and file name of the selected recovery point file.
-g logfile creates a list of the directories and files that were online when the recovery point was created and saves the list to the file specified by logfile.
If you are restoring an archiving file system, this file can be used to automatically stage files from archival media, so that the disk cache is in the same state as it was at the time that the recovery point was created.
In the example, we restore the file system hqfs1 from the recovery point file /zfs1/hqfs1_recovery/20161024. We log the online files in the file /root/20161024.log:
root@mds1:~# samfsrestore -T -f /zfs1/hqfs1_recovery/20161024 -g /root/20161024.log
samfsdump statistics:
    Files:              52020
    Directories:        36031
    Symbolic links:     0
    Resource files:     8
    File segments:      0
    File archives:      0
    Damaged files:      0
    Files with data:    24102
    File warnings:      0
    Errors:             0
    Unprocessed dirs:   0
    File data bytes:    0
root@mds1:~#
If you have restored a standalone (non-archiving) file system, the file-system metadata and file data that were saved in the recovery-point file have been restored. Stop here.
Otherwise, restage archived files if required.
In most cases, do not restage files from archival media to disk following a file system recovery. Let users stage files as needed, by accessing them.
This approach automatically prioritizes staging according to user needs. It maximizes the availability of the file system at a time when it may have been offline for some time.
Only immediately required files are staged. So the total staging effort is spread over a period of time. This helps to ensure that file-system resources, such as drives, are always available for high-priority tasks, such as archiving new files and staging urgently required user data.
This approach also reduces the administrative effort associated with recovery.
If you must restage the files that were resident in the disk cache prior to a failure, use the command /opt/SUNWsamfs/examples/restore.sh logfile, where logfile is the path and file name of the log file that you created with the -g option of the samfsrestore (qfsrestore) command.
The restore.sh script stages the files listed in the log file. These are the files that were online when the samfsrestore (qfsrestore) recovery point file was created.
If thousands of files need to be staged, consider splitting the log file into smaller files. Then run the restore.sh script with each file in turn, as in the sketch below. This spreads the staging effort over a period of time and reduces interference with archiving and user-initiated staging.
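For instance, here is a minimal sketch of that approach, assuming the log file from the example above and an arbitrary 1000-line piece size (adjust the size, names, and pacing for your site):
root@mds1:~# split -l 1000 /root/20161024.log /root/stagelist.
root@mds1:~# /opt/SUNWsamfs/examples/restore.sh /root/stagelist.aa
You would then run restore.sh against the remaining pieces (/root/stagelist.ab, /root/stagelist.ac, and so on) at intervals, as drive availability permits.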
The samfsrestore process restores a copy of the file-system metadata from a recovery point file, so that you can find the corresponding data on tape and restore it to its proper locations in the file system. Recovery point files are created prior to the loss of the file system, however. So some of the metadata inevitably points to data locations that have changed since the recovery point was created. The file system has a record of these files but cannot locate their contents. So it sets the damaged flag on each such file.
In some cases, the data for a damaged file may indeed be lost. But in other cases, the restored metadata is simply out of date. The restored file system may not be able to find data for files that were archived or migrated after the recovery point was created, simply because the restored metadata does not record a current location. In these cases, you may be able to undamage the files by locating the data yourself and then updating the restored metadata.
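As a quick check on any individual file, the Oracle HSM sls command with the -D (detailed) option displays a file's status flags and archive-copy information. A sketch only; the file name is illustrative, and the detailed output is not reproduced here:
root@mds1:~# sls -D /hsm/hqfs1/genfiles/ay0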
To locate missing data, update metadata, and undamage files, use the archiver log and media-migration log files (if any). Proceed as follows:
If you have not already done so, log in to the file-system metadata server as root.
root@mds1:~#
Identify the most recent available archiver log file.
If the archiver log on the server is still available, it is likely to contain the most recent information. Otherwise, you will need to use a backup copy.
In the example, the archiver log file hqfs1.archiver.log is on the server in the /var/adm/ subdirectory. We also have dated archiver log file copies in a well-known location, the subdirectory hqfs1_recovery/archlogs on the independent file system /zfs1. So we have both the latest file, hqfs1.archiver.log, and a recent backup, 20161024:
root@mds1:~# dir /var/adm/*.archiver.log
hqfs1.archiver.log
root@mds1:~# dir /zfs1/hqfs1_recovery/archlogs
20161022  20161023  20161024
root@mds1:~#
If files were recently migrated to replacement media, locate the migration logs as well.
Media-migration logs are created for each source volume in the logging directory specified by the migrationd.cmd file. Logs are named media-type.vsn, where media-type is one of the two-character codes described in Appendix B, "Glossary of Equipment Types", and vsn is the six-character, alphanumeric volume serial number of the source volume.
Media-migration logs contain the same recovery information as archiver logs and can be used in the same fashion. For a description of the few format differences, see Appendix A, "Understanding Archiver and Migration Logs".
In the newly restored file system, identify any damaged files. Use the command sfind mountpoint -damaged, where mountpoint is the directory where the recovered file system is mounted.
In the example, we start the search in the directory /hsm/hqfs1 and find six damaged files:
root@mds1:~# sfind /hsm/hqfs1 -damaged
./genfiles/ay0
./genfiles/ay1
./genfiles/ay2
./genfiles/ay5
./genfiles/ay6
./genfiles/ay9
root@mds1:~#
Search the most recent copy of the archiver log for entries relating to each of the damaged files. Use the command grep "file-name-expression" archiver-log, where file-name-expression is a regular expression that matches the damaged file and archiver-log is the path and name of the archiver log copy that you are examining.
In the example, we use the regular expression genfiles\/ay0 to search the most recent log file for entries relating to the file genfiles/ay0:
root@mds1:~# grep "genfiles\/ay0 " /var/adm/hqfs1.archiver.log
When you find an entry for a file, note the media type, volume serial number, and position of the archive (tar) file where the data file is archived. Also note the file type, since this will affect how you restore the file.
In the example, we locate an entry for the file genfiles/ay0. The log entry shows that it was archived (A) on October 24, 2016 at 9:49 PM using LTO (li) volume VOL012. The data is stored in the tape archive file located at hexadecimal position 0x78. The file is a regular file, type f:
root@mds1:~# grep "genfiles\/ay0 " /var/adm/hqfs1.archiver.log
A 2016/10/24 21:49:15 li VOL012 SLOT12 allsets.1 78.1 hqfs1 7131.14 8087 genfiles/ay0 f 0 51
root@mds1:~#
For a full explanation of the fields in archiver log entries, see Appendix A, "Understanding Archiver and Migration Logs".
If you do not find an entry for a damaged file in the current archiver log copy, repeat the search using any backup archive logs that were created after the recovery point file was created.
Archiver logs are rolled over frequently. So, if you retain multiple archiver log copies, you may be able to recover damaged files using archive copies that were made before the period covered by the current archiver log.
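When many files are damaged, you can combine the two previous steps. A minimal sketch, assuming the log locations used in this example (file names that contain spaces would need more careful handling):
root@mds1:~# cd /hsm/hqfs1
root@mds1:~# for f in $(sfind . -damaged | sed 's|^\./||'); do
>     grep "$f " /var/adm/hqfs1.archiver.log /zfs1/hqfs1_recovery/archlogs/*
> done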
Next, look for files that were archived after the recovery point was created.
The samfsrestore process restores a copy of the file-system metadata from a recovery point file, so that you can find the corresponding file-system data on tape and restore it to its proper locations in the file system. Recovery point files are created prior to the loss of the file system, however. They cannot contain metadata for files created and archived thereafter.
Typically, some files are archived after the last recovery point is created and prior to the loss of a file system. Since the metadata for these files is not in the recovery point file, samfsrestore cannot recover them, even as damaged files. The file data does, however, reside on archival media. So you can recreate the metadata and recover the files to their proper place in the file system using the archive logs. If files were migrated to replacement media prior to the loss of the file system, you can use media-migration logs as well.
If you have not already done so, log in to the file-system metadata server as root.
root@mds1:~#
Identify the most recent available archiver log file.
If the archiver log on the server is still available, it is likely to contain the most recent information. Otherwise, you will need to use a backup copy.
In the example, the archiver log file hqfs1.archiver.log is on the server in the /var/adm/ subdirectory. We also have dated archiver log file copies in a well-known location, the subdirectory hqfs1_recovery/archlogs on the independent file system /zfs1. So we have both the latest file, hqfs1.archiver.log, and a recent backup, 20161024:
root@mds1:~# dir /var/adm/*.archiver.log
hqfs1.archiver.log
root@mds1:~# dir /zfs1/hqfs1_recovery/archlogs
20161022  20161023  20161024
root@mds1:~#
If files were recently migrated to replacement media, locate the migration logs as well.
Media-migration logs are created for each source volume in the logging directory specified by the migrationd.cmd file. Logs are named media-type.vsn, where media-type is one of the two-character codes described in Appendix B, "Glossary of Equipment Types", and vsn is the six-character, alphanumeric volume serial number of the source volume.
Media-migration logs contain the same recovery information as archiver logs and can be used in the same fashion. For a description of the few format differences, see Appendix A, "Understanding Archiver and Migration Logs".
Search the most recent copy of the archiver log for entries that were made after the recovery point was created. Use the command grep "time-date-expression" archiver-log, where time-date-expression is a regular expression that matches the date and time at which you want to start searching and archiver-log is the path and name of the archiver log copy that you are examining.
In the example, we lost the file system at 2:02 AM on October 25, 2016. The last recovery point file was made at 2:10 AM on October 24, 2016. So we use the regular expression ^A 2016\/10\/2[45] to search the most recent log file for archived files that were logged on October 24 or 25:
root@mds1:~# grep "^A 2016\/10\/2[45]" /var/adm/hqfs1.archiver.log
When you find an entry for an archived copy of an unrestored file, note the path, name, file type, media type, and location information.
File types are listed as f for regular files, R for removable-media files, or S for a data segment in a segmented file. The media type is a two-character code (see Appendix B, "Glossary of Equipment Types").
To locate the backup copy, you need the volume serial number of the media volume that stores the copy. If the copy is stored on sequential-access media, such as magnetic tape, also note the hexadecimal value that represents the starting position of the archive (tar) file. If the copy is stored on random-access media, such as archival disk, note the path and file name of the tar file relative to the volume serial number. Finally, if the file is segmented, note the segment length.
In the example below, the archiver log entries show that the following files were archived after the last recovery point was created:
root@mds1:~# grep "^A 2016\/10\/2[45]" /var/adm/hqfs1.archiver.log A 2016/10/24 10:43:18 li VOL002 all.1 111.1 hqfs1 1053.3 69 genfiles/hops f 0 0 A 2016/10/24 10:43:18 li VOL002 all.1 111.3 hqfs1 1051.1 104 genfiles/anic f 0 0 A 2016/10/24 13:09:05 li VOL004 all.1 212.1 hqfs1 1535.2 1971 genfiles/genA0 f 0 0 A 2016/10/24 13:09:06 li VOL004 all.1 212.20 hqfs1 1534.2 1497 genfiles/genA9 f 0 0 A 2016/10/24 13:10:15 li VOL004 all.1 212.3f hqfs1 1533.2 6491 genfiles/genA2 f 0 0 A 2016/10/24 13:12:25 li VOL003 all.1 2.5e hqfs1 1532.2 17717 genfiles/genA13 f 0 0 A 2016/10/24 13:12:28 li VOL003 all.1 2.7d hqfs1 1531.2 14472 genfiles/genA4 f 0 0 A 2016/10/24 13:12:40 li VOL003 all.1 2.9c hqfs1 1530.2 19971 genfiles/genA45 f 0 0 A 2016/10/24 21:49:15 dk DISKVOL1/f2 all.1 2.2e9 hqfs1 1511.2 8971 socfiles/spcC4 f 0 0 A 2016/10/24 21:49:15 dk DISKVOL1/f2 all.1 2.308 hqfs1 1510.2 7797 spcfiles/spcC5 f 0 0 A 2016/10/24 14:01:47 li VOL013 all.1 76a.1 hqfs1 14.5 10485760 bf/dat011/1 S 0 51 A 2016/10/24 14:04:11 li VOL013 all.1 76a.5002 hqfs1 15.5 10485760 bf/dat011/2 S 0 51 A 2016/10/24 14:06:24 li VOL013 all.1 1409aa4.1 hqfs1 16.5 184 bf/dat011/3 S 0 51 A 2016/10/24 18:28:51 li VOL036 all.1 12d.1 hqfs1 11731.1 89128448 rf/rf81 f 0 210 A 2016/10/24 18:28:51 li VOL034 all.1 15f.0 hqfs1 11731.1 525271552 rf/rf81 f 1 220 root@mds1:~#
We note the following information:
Eight regular (type f) files are archived (A) on LTO (li) media: genfiles/hops and genfiles/anic at position 0x111 on volume VOL002; genfiles/genA0, genfiles/genA9, and genfiles/genA2 at position 0x212 on volume VOL004; and genfiles/genA13, genfiles/genA4, and genfiles/genA45 at position 0x2 on volume VOL003.
Two regular (type f) files are archived (A) on disk (dk) media: spcfiles/spcC4 and spcfiles/spcC5 in archive file f2 on volume DISKVOL1.
One three-part, segmented (type S) file is archived on LTO (li) media: bf/dat011, in two segments starting at position 0x76a and a third segment starting at position 0x1409aa4 on volume VOL013. Segment /1 is 10485760 bytes long, segment /2 is 10485760 bytes, and segment /3 is 184 bytes.
One regular (type f) volume overflow file is archived (A) on LTO (li) media: rf/rf81, starting at position 0x12d on volume VOL036 and continuing at position 0x15f on volume VOL034.
For a full explanation of the fields in archiver log entries, see Appendix A, "Understanding Archiver and Migration Logs".
Repeat the search using any backup archive logs that were created after the recovery point file was created.
Archiver logs are rolled over frequently. So, if you retain multiple archiver log copies, you may be able to recover damaged files using archive copies that were made before the period covered by the current archiver log.
Given the media volume and the position of an archive (tar) file on the media, restoring a missing or damaged file is simply a matter of accessing the tar file and extracting the required data file. When the archive files reside on archival disk devices, this is simple, because the tar files reside in randomly accessible directories under a file-system mount point. When the tar file resides on high-capacity, sequential-access media like tape, however, there is an added complication: we cannot normally extract the required data file from the archive file until the latter is staged to a random-access disk device. Since archive files can be large, this can be time-consuming and awkward in a recovery situation. So the procedures below take advantage of the Oracle HSM command request, which reads the archive files into memory and makes them available as if they were being read from disk.
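To give the flavor of the technique, here is a minimal sketch based on the log entry for genfiles/ay0 shown earlier. The removable-media file name rmfile and the block size are illustrative assumptions; the procedures referenced below are authoritative. The request command binds a file to the archive (tar) file at hexadecimal position 0x78 on LTO (li) volume VOL012, and star then extracts the data file from it:
root@mds1:~# request -p 0x78 -m li -v VOL012 /hsm/hqfs1/rmfile
root@mds1:~# cd /hsm/hqfs1
root@mds1:~# star xvbf 512 rmfile genfiles/ay0
root@mds1:~#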
Restore as many damaged and missing regular files as you can. For each file, proceed as follows:
Start by recovering regular files that do not span volumes. Use the procedure "Restore Lost and Damaged Regular Files".
Next, recover the segmented files. Use the procedure "Restore Lost and Damaged Segmented Files".
Then restore the regular files that do span volumes. Use the procedure "Restore Lost and Damaged Volume Overflow Files".
Once you have restored all missing and damaged files that have copies, re-enable archiving by removing wait directives from the archiver.cmd file. Re-enable recycling by removing -ignore parameters from the recycler.cmd file.
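A condensed sketch of these edits (the wait and -ignore directives are the ones described above; see "Stopping Archiving and Recycling Processes" for the authoritative steps):
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
... (delete the wait directive)
:wq
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
... (delete the -ignore parameters)
:wq
root@mds1:~# samd config
Configuring SAM-FS
root@mds1:~#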
At this point, the file system is as close to its original condition as possible. Files that are still damaged or missing cannot be recovered.
Once you have restored all missing and damaged files that have copies, go to "Restoring Archiving File Systems to Normal Operation".
Restoring from Archival Media Without a Recovery Point File
If you must recover a file system directly from the archival media, without the assistance of a recovery point file, you can do so. Proceed as follows:
If you are trying to restore files from optical media, stop here and contact Oracle support services for assistance.
Disable Network File System (NFS) sharing for the file system.
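For example, assuming the file system was shared as a simple NFS share of its mount point (your site's sharing configuration may differ), we unshare /hsm/hqfs1:
root@mds1:~# unshare /hsm/hqfs1
root@mds1:~#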
Disable archiving and recycling. Use the method outlined in "Stopping Archiving and Recycling Processes".
Reserve a tape drive for recovery. Use the command samcmd unavail drive-equipment-number, where drive-equipment-number is the equipment ordinal number assigned to the drive in the /etc/opt/SUNWsamfs/mcf file.
The samcmd unavail command makes the drive unavailable to archiving, staging, and releasing processes. In the example, we reserve drive 804:
root@mds1:~# samcmd unavail 804
root@mds1:~#
Copy the file /opt/SUNWsamfs/examples/tarback.sh to an alternate location, such as /tmp.
The tarback.sh file is an executable script that restores files from a specified set of media volumes. The script runs the command star -n against each archive (tar) file on each volume. When a backup copy on tape has no corresponding file in the file system, or when the copy on tape is newer than the corresponding file in the file system, star -n restores the copy.
In the example, we copy the script to /tmp:
root@mds1:~# cp /opt/SUNWsamfs/examples/tarback.sh /tmp/tarback.sh
root@mds1:~#
Open the copy of the tarback.sh file in a text editor.
In the example, we use the vi editor:
root@mds1:~# vi /tmp/tarback.sh
#!/bin/sh
# script to reload files from SAMFS archive tapes
STAR="/opt/SUNWsamfs/sbin/star"
LOAD="/opt/SUNWsamfs/sbin/load"
UNLOAD="/opt/SUNWsamfs/sbin/unload"
EQ=28
TAPEDRIVE="/dev/rmt/3cbn"
# BLOCKSIZE is in units of 512 bytes (e.g. 256 for 128K)
BLOCKSIZE=256
MEDIATYPE="lt"
VSN_LIST="VSNA VSNB VSNC VSNZ"
...
If the Oracle HSM utilities star, load, and unload are installed in non-standard locations, edit the default command paths in the copy of the tarback.sh file.
In the example, all utilities are installed in the default locations, so no edits are needed:
root@mds1:~# vi /tmp/tarback.sh
#!/bin/sh
# script to reload files from SAMFS archive tapes
STAR="/opt/SUNWsamfs/sbin/star"
LOAD="/opt/SUNWsamfs/sbin/load"
UNLOAD="/opt/SUNWsamfs/sbin/unload"
...
In the copy of the tarback.sh file, locate the variable EQ. Set its value to the equipment ordinal number of the drive that you reserved for recovery use.
In the example, we set EQ=804:
root@mds1:~# vi /tmp/tarback.sh
#!/bin/sh
# script to reload files from SAMFS archive tapes
STAR="/opt/SUNWsamfs/sbin/star"
LOAD="/opt/SUNWsamfs/sbin/load"
UNLOAD="/opt/SUNWsamfs/sbin/unload"
EQ=804
...
In the copy of the tarback.sh file, locate the variable TAPEDRIVE. Set its value to the raw path to the device, enclosed in double quotation marks.
In the example, the raw path to device 804 is /dev/rmt/3cbn:
root@mds1:~# vi /tmp/tarback.sh
#!/bin/sh
# script to reload files from SAMFS archive tapes
STAR="/opt/SUNWsamfs/sbin/star"
LOAD="/opt/SUNWsamfs/sbin/load"
UNLOAD="/opt/SUNWsamfs/sbin/unload"
EQ=804
TAPEDRIVE="/dev/rmt/3cbn"
...
In the copy of the tarback.sh file, locate the variable BLOCKSIZE. Set its value to the number of 512-byte units in the desired block size.
In the example, we want a 256-kilobyte block size for the LTO drive. So we specify 512 (512 units of 512 bytes each):
LOAD="/opt/SUNWsamfs/sbin/load"
UNLOAD="/opt/SUNWsamfs/sbin/unload"
EQ=804
TAPEDRIVE="/dev/rmt/3cbn"
BLOCKSIZE=512
...
In the copy of the tarback.sh file, locate the variable MEDIATYPE. Set its value to the two-character media-type code that Appendix B, "Glossary of Equipment Types", lists for the type of media that the drive supports. Enclose the media type in double quotation marks.
In the example, we are using an LTO-4 drive. So we specify li:
EQ=804
TAPEDRIVE="/dev/rmt/3cbn"
BLOCKSIZE=512
MEDIATYPE="li"
...
In the copy of the tarback.sh file, locate the variable VSN_LIST. As its value, supply a space-delimited list of the volume serial numbers (VSNs) that identify tapes that might contain backup copies of your files. Enclose the list in double quotation marks.
In the example, we specify volumes VOL002, VOL003, VOL004, VOL013, VOL034, and VOL036:
EQ=804
TAPEDRIVE="/dev/rmt/3cbn"
BLOCKSIZE=512
MEDIATYPE="li"
VSN_LIST="VOL002 VOL003 VOL004 VOL013 VOL034 VOL036"
...
Save the copy of the tarback.sh file, and close the editor.
EQ=804
TAPEDRIVE="/dev/rmt/3cbn"
BLOCKSIZE=512
MEDIATYPE="lt"
VSN_LIST="VOL002 VOL003 VOL004 VOL013 VOL034 VOL036"
...
:wq
root@mds1:~#
Execute the /tmp/tarback.sh script.
root@mds1:~# /tmp/tarback.sh
For each restored file, recreate user and group ownership, modes, extended attributes, and access control lists (ACLs), as necessary.
The /tmp/tarback.sh script cannot restore these types of metadata.
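For instance, a minimal, illustrative sketch (the user jsmith, group staff, mode, and ACL entry are placeholders for the values recorded at your site):
root@mds1:~# chown jsmith:staff /hsm/hqfs1/genfiles/ay0
root@mds1:~# chmod 640 /hsm/hqfs1/genfiles/ay0
root@mds1:~# setfacl -m mask:r--,user:ops:r-- /hsm/hqfs1/genfiles/ay0
root@mds1:~#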
Once you have run the /tmp/tarback.sh script and finished recovering files, go to "Restoring Archiving File Systems to Normal Operation".