Oracle® Hierarchical Storage Manager and StorageTek QFS Software File System Recovery Guide
Release 6.0
E42065-03

4 Recovering File Systems

This section outlines the recovery processes that you use when an entire Oracle HSM file system is corrupted or lost. The procedures vary, depending on the type of file system involved and the type of backup and recovery preparations that you have made. But there are two basic tasks that you have to perform: recreating the file system and then restoring its directories and files.

Before you begin, please note: if you are recovering from the loss of an Oracle HSM metadata server, make sure that you have finished Restoring the Oracle HSM Configuration, as described in Chapter 3, before proceeding further. The procedures in this chapter assume that the Oracle HSM software is installed and configured as it was prior to the loss of the file system.

Recreating the File System

Before you can recover files and directories, you must have somewhere to put them. So the first step in the recovery process is to create an empty, replacement file system. Proceed as follows:

Recreate the File System Using Backup Configuration Files and the sammkfs Command

  1. Log in to the file-system metadata server as root.

    root@solaris:~# 
    
  2. Unmount the file system, if it is currently mounted. Use the command umount mount-point, where mount-point is the directory on which the file system is mounted.

    In the example, we unmount the file system /samqfs1:

    root@solaris:~# umount /samqfs1
    root@solaris:~# 
    
  3. Open the /etc/opt/SUNWsamfs/mcf file in a text editor. Check the hardware configuration. If you have had to change hardware, edit the file accordingly and save the changes.

    In the example, we replace the equipment identifiers for two failed disk devices with those of their replacements. Note that the equipment ordinals remain unchanged:

    root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment              Equipment  Equipment  Family     Device  Additional
    # Identifier             Ordinal    Type       Set        State   Parameters
    #----------------------- ---------  ---------  ---------  ------  -------------
    samqfs1                  100        ms         samqfs1    on
     /dev/dsk/c1t3d0s3        101        md         samqfs1    on
     /dev/dsk/c1t4d0s5        102        md         samqfs1    on 
    # Tape library
    /dev/scsi/changer/c1t2d0 800        rb         lib800     on     .../lib800_cat
     /dev/rmt/0cbn            801        li         lib800     on
     /dev/rmt/1cbn            802        li         lib800     on
    :wq
    root@solaris:~# 
    
  4. Check the mcf file for errors. Use the command sam-fsd.

    The sam-fsd command reads the Oracle HSM configuration files and initializes the software. It stops if it encounters an error:

    root@solaris:~# sam-fsd
    
  5. If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step.

    In the example below, sam-fsd reports an unspecified problem with a device. This is probably a typo in an equipment identifier field:

    root@solaris:~# sam-fsd
    Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem qfsms
    sam-fsd: Problem with file system devices.
    

    Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed a letter o instead of a 0 in the slice number part of the equipment name for device 102, the second md device:

    root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
    ...
    qfsms                100        ms         qfsms      on       
      /dev/dsk/c0t0d0s0   101        md         qfsms      on
      /dev/dsk/c0t3d0so   102        md         qfsms      on
    

    So we correct the error, save the file, and recheck:

    root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
    ...
    qfsms                100        ms         qfsms      on       
      /dev/dsk/c0t0d0s0   101        md         qfsms      on
      /dev/dsk/c0t3d0s0   102        md         qfsms      on
    :wq
    root@solaris:~# sam-fsd
    
  6. When the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step.

    In the example, sam-fsd runs without error:

    root@solaris:~# sam-fsd
    Trace file controls:
    sam-amld      /var/opt/SUNWsamfs/trace/sam-amld
    ...
    Would start sam-archiverd()
    Would start sam-stagealld()
    Would start sam-stagerd()
    Would start sam-amld()
    root@solaris:~# 
    
  7. Tell the Oracle HSM software to read the mcf file and reconfigure itself accordingly:

    root@solaris:~# samd config
    Configuring SAM-FS
    root@solaris:~# 
    
  8. Create the replacement file system. Use the command sammkfs family-set-name, where family-set-name is the name of the file system.

    In the example, we recreate file system samqfs1:

    root@solaris:~# sammkfs samqfs1
    Building 'samqfs1' will destroy the contents of devices:
      /dev/dsk/c1t3d0s3
      /dev/dsk/c1t4d0s5
    Do you wish to continue? [y/N]yes
    total data kilobytes       = ...
    root@solaris:~# 
    
  9. Recreate the mount point directory for the file system, if necessary.

    In the example, we recreate the directory /samqfs1:

    root@solaris:~# mkdir /samqfs1
    root@solaris:~# 
    
  10. Back up the operating system's /etc/vfstab file.

    root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
    root@solaris:~# 
    
  11. Open the /etc/vfstab file in a text editor. If the /etc/vfstab file does not contain mount parameters for the file system that you are restoring, you will have to restore the mount parameters.

    In the example, the Oracle HSM server is installed on a replacement host. So the file contains no mount parameters for the file system that we are restoring, samqfs1:

    root@solaris:~# vi /etc/vfstab
    #File
    #Device    Device   Mount     System  fsck  Mount    Mount
    #to Mount  to fsck  Point     Type    Pass  at Boot  Options
    #--------  -------  --------  ------  ----  -------  ---------------------
    /devices   -        /devices  devfs   -     no       -
    /proc      -        /proc     proc    -     no       -
    ...
    
  12. If possible, when you must restore mount parameters, open a backup copy of the original /etc/vfstab file and copy the required line into the current /etc/vfstab file. When the changes are complete, save the file and close the editor.

    In the example, we have a backup copy, /zfs1/sam_config/20140127/etc/vfstab.20140127. So we copy the line for the samqfs1 file system from the backup copy and paste it into the current /etc/vfstab file:

    root@solaris:~# vi /zfs1/sam_config/20140127/etc/vfstab.20140127
    #File
    #Device    Device   Mount     System  fsck  Mount    Mount
    #to Mount  to fsck  Point     Type    Pass  at Boot  Options
    #--------  -------  --------  ------  ----  -------  ---------------------
    /devices   -        /devices  devfs   -     no       -
    /proc      -        /proc     proc    -     no       -
    ...
    samqfs1    -        /samqfs1  samfs   -     yes      stripe=1,bg      
    :q
    
    root@solaris:~# vi /etc/vfstab
    #File
    #Device    Device   Mount     System  fsck  Mount    Mount
    #to Mount  to fsck  Point     Type    Pass  at Boot  Options
    #--------  -------  --------  ------  ----  -------  ---------------------
    /devices   -        /devices  devfs   -     no       -
    /proc      -        /proc     proc    -     no       -
    ...
    samqfs1    -        /samqfs1  samfs   -     yes      stripe=1,bg      
    :wq
    root@solaris:~# 
    
  13. Mount the file system.

    In the example, we mount the file system samqfs1:

    root@solaris:~# mount /samqfs1
    root@solaris:~# 
    
  14. Now, start Restoring Directories and Files.

Restoring Directories and Files

Once you have recreated the base file system, you can start to restore directories and files. There are two possible approaches:

  • Restoring Files and Directories from a samfsdump (qfsdump) Recovery Point File is by far the best option, if you have created and safely stored recovery points on a regular basis.

    This approach returns the file system to full functionality immediately, because it restores the file-system metadata. An archiving file system can immediately access data on archival media and stage files back to the disk cache, either immediately or as-needed, when users access files. Files are restored with their original attributes.

    If the recovery point contains data as well as metadata, this approach is also the only way to restore stand-alone (non-archiving) file systems that are not backed up by third-party applications.

  • Restoring Files and Directories from Archival Media without a Recovery Point File is the fallback approach. It uses a recovery script and the Oracle HSM star utility to rebuild files directly from the archival media.

Restoring Files and Directories from a samfsdump (qfsdump) Recovery Point File

Whenever possible, you should base file-system recovery efforts on the most recent available recovery point file. This approach is by far the fastest, most reliable, most thorough, and least labor-intensive way of recovering from the failure of an Oracle HSM file system. So, if a recovery point file exists, proceed as follows:

Restore the Lost File System from a Recovery Point File

  1. Log in to the file-system metadata server as root.

    root@solaris:~# 
    
  2. If you have not already done so, stop archiving and recycling using the procedures in "Stopping Archiving and Recycling Processes".

  3. Identify the most recent available recovery point file.

    In the example, we have been creating dated recovery point files for the file system samqfs1 in a well-known location, the subdirectory samqfs1_recovery on the independent file system /zfs1. So the latest file, 20140324, is easy to find:

    root@solaris:~# ls /zfs1/samqfs1_recovery/
    20140321    20140322    20140323    20140324
    root@solaris:~# 
    
  4. Change to the mount-point directory for the recreated file system.

    In the example, the recreated file system is mounted at /samqfs1:

    root@solaris:~# cd /samqfs1
    root@solaris:~# 
    
  5. Restore the entire file system relative to the current directory. Use the command samfsrestore -T -f recovery-point-file -g logfile or the QFS-only command qfsrestore -T -f recovery-point-file -g logfile, where:

    • -T displays recovery statistics when the command terminates, including the number of files and directories processed and the number of errors and warnings.

    • -f recovery-point-file specifies the path and file name of the selected recovery point file.

    • -g logfile creates a list of the directories and files that were online when the recovery point was created and saves the list to the file specified by logfile.

      If you are restoring an archiving file system, this file can be used to automatically stage files from archival media, so that the disk cache is in the same state as it was at the time that the recovery point was created.

    In the example, we restore the file system samqfs1 from the recovery point file /zfs1/samqfs1_recovery/20140324. We log the online files in the file /root/20140324.log (note that the command below is entered as a single line—the line break is escaped by the backslash character):

    root@solaris:~# samfsrestore -T -f /zfs1/samqfs1_recovery/20140324 \
    -g /root/20140324.log
          samfsdump statistics:
                    Files:              52020
                    Directories:        36031
                    Symbolic links:     0
                    Resource files:     8
                    File segments:      0
                    File archives:      0
                    Damaged files:      0
                    Files with data:    24102
                    File warnings:      0
                    Errors:             0
                    Unprocessed dirs:   0
                    File data bytes:    0
    root@solaris:~# 
    
  6. If you have restored a standalone (non-archiving) file system, the file-system metadata and file data that were saved in the recovery-point file have been restored. Stop here.

  7. Otherwise, Restage Archived Files If Required.

Restage Archived Files If Required

  1. In most cases, do not restage files from archival media to disk following a file system recovery. Let users stage files as needed, by accessing them.

    This approach automatically prioritizes staging according to user needs. It maximizes the availability of the file system at a time when it may have been offline for some time. Only immediately required files are staged, so the total staging effort is spread over a period of time. This helps to ensure that file-system resources, such as drives, are always available for high-priority tasks, such as archiving new files and staging urgently required user data.

    This approach also reduces the administrative effort associated with recovery.

  2. If you must restage the files that were resident in the disk cache prior to a failure, use the command /opt/SUNWsamfs/examples/restore.sh logfile, where logfile is the path and file name of the log file that you created with the -g option of the samfsrestore (qfsrestore) command.

    The restore.sh script stages the files listed in the log file. These are the files that were online when the samfsrestore (qfsrestore) recovery point file was created.

    If thousands of files need to be staged, consider splitting the log file into smaller files. Then run the restore.sh script with each file in turn. This spreads the staging effort over a period of time and reduces interference with archiving and user-initiated staging.
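The split-and-batch approach above can be sketched in shell. The chunk size, the temporary files, and the echoed restore.sh call are illustrative assumptions; in a real recovery you would point the script at the log created with -g and run restore.sh against each chunk:

```shell
# Illustrative sketch of batching a samfsrestore -g log for restore.sh.
# A synthetic five-line log stands in for the real one so the loop can
# be seen end to end; the restore.sh call itself is only echoed here.
LOGFILE=$(mktemp)
printf 'genfiles/f1\ngenfiles/f2\ngenfiles/f3\ngenfiles/f4\ngenfiles/f5\n' > "$LOGFILE"
CHUNKDIR=$(mktemp -d)
# Split the log into 2-line chunks (use something like -l 1000 in practice).
split -l 2 "$LOGFILE" "$CHUNKDIR/chunk."
for chunk in "$CHUNKDIR"/chunk.*; do
    # In a real recovery, replace the echo with:
    #   /opt/SUNWsamfs/examples/restore.sh "$chunk"
    echo "would stage the files listed in $chunk"
done
```

Running the chunks one at a time, perhaps during off-peak hours, is what spreads the staging load as described above.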

  3. Now Identify Damaged Files and Locate Replacement Copies.

Identify Damaged Files and Locate Replacement Copies

The samfsrestore process restores a copy of the file-system metadata from a recovery point file, so that you can find the corresponding file-system data on tape and restore it to its proper locations in the file system. Recovery point files are created prior to the loss of the file system, however. So some of the metadata inevitably points to data locations that have changed since the recovery point was created. The file system has a record of these files but cannot locate their contents. So it sets the damaged flag on each such file.

In some cases, the data for a damaged file may, indeed, be lost. But in other cases, the restored metadata is simply out of date. The restored file system may not be able to find data for files that were archived after the recovery point was created simply because the restored metadata does not record a current location. In these cases, you may be able to undamage the files by locating the data yourself and then updating the restored metadata.

To locate missing data, update metadata, and undamage files, use the archiver logs. Proceed as follows:

  1. If you have not already done so, log in to the file-system metadata server as root.

    root@solaris:~# 
    
  2. Identify the most recent available archiver log file.

    If the archiver log on the server is still available, it is likely to contain the most recent information. Otherwise, you will need to use a backup copy.

    In the example, the archiver log file samqfs1.archiver.log is on the server in the /var/adm/ subdirectory. We also have dated archiver log file copies in a well-known location, the subdirectory samqfs1_recovery/archivelogs on the independent file system /zfs1. So we have both the latest file, samqfs1.archiver.log, and a recent backup, 20150324:

    root@solaris:~# ls /var/adm/*.archiver.log
    /var/adm/samqfs1.archiver.log
    root@solaris:~# ls /zfs1/samqfs1_recovery/archivelogs
    20150322    20150323    20150324
    root@solaris:~# 
    
  3. In the newly restored file system, identify any damaged files. Use the command sfind mountpoint -damaged, where mountpoint is the directory where the recovered file system is mounted.

    In the example, we start the search in the directory /samqfs1 and find six damaged files:

    root@solaris:~# sfind /samqfs1 -damaged
    ./genfiles/ay0
    ./genfiles/ay1
    ./genfiles/ay2
    ./genfiles/ay5
    ./genfiles/ay6
    ./genfiles/ay9
    root@solaris:~# 
    
  4. Search the most recent copy of the archiver log for entries relating to each of the damaged files. Use the command grep "file-name-expression" archiver-log, where file-name-expression is a regular expression that matches the damaged file and archiver-log is the path and name of the archiver log copy that you are examining.

    In the example, we use the regular expression genfiles\/ay0 to search the most recent log file for entries relating to the file genfiles/ay0:

    root@solaris:~# grep "genfiles\/ay0 " /var/adm/samqfs1.archiver.log
    
  5. When you find an entry for a file, note the media type, volume serial number, and position of the archive (tar) file where the data file is archived. Also note the file type, since this will affect how you restore the file.

    In the example, we locate an entry for the file genfiles/ay0. The log entry shows that it was archived (A) on March 4, 2015 at 9:49 PM using LTO (li) volume VOL012. The file is stored in the tape archive file located at hexadecimal position 0x78 (78). The file is a regular file, type f:

    root@solaris:~# grep "genfiles\/ay0 " /var/adm/samqfs1.archiver.log
    A 2015/03/04 21:49:15 li VOL012 SLOT12 allsets.1 78.1 samqfs1 7131.14 8087 genfiles/ay0 f 0 51
    root@solaris:~# 
    

    For a full explanation of the fields in archiver log entries, see Appendix A, "Understanding the Archiver Log".
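    As a quick illustration of pulling these fields out of a log entry, the awk sketch below parses the genfiles/ay0 line shown above. The field positions are taken from that one example entry; verify them against Appendix A before relying on them for your log format:

```shell
# Extract media type, VSN, tar-file position, path, and file type from
# an archiver log entry (field positions per the example entry above).
entry='A 2015/03/04 21:49:15 li VOL012 SLOT12 allsets.1 78.1 samqfs1 7131.14 8087 genfiles/ay0 f 0 51'
echo "$entry" | awk '{
    split($8, pos, ".")   # "78.1" -> tar-file position "78" and file index "1"
    printf "media=%s vsn=%s position=0x%s file=%s type=%s\n", $4, $5, pos[1], $12, $13
}'
```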

  6. If you do not find an entry for a damaged file in the current archiver log copy, repeat the search using any backup archive logs that were created after the recovery point file was created.

    Archiver logs are rolled over frequently. So, if you retain multiple archiver log copies, you may be able to recover damaged files using archive copies that were made before the period covered by the current archiver log.
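    The per-file searches across multiple log copies can be scripted as a loop. The sketch below uses synthetic stand-ins for the sfind output and the log copies, since both are site-specific; in practice you would feed it the output of sfind mountpoint -damaged (stripping the leading ./ from each path) and point it at /var/adm plus your backup log directory:

```shell
# Loop each damaged file over every retained archiver log copy.
# Synthetic data stands in for the sfind output and the real logs.
LOGDIR=$(mktemp -d)
echo 'A 2015/03/04 21:49:15 li VOL012 SLOT12 allsets.1 78.1 samqfs1 7131.14 8087 genfiles/ay0 f 0 51' > "$LOGDIR/20150322"
DAMAGED=$(mktemp)
printf 'genfiles/ay0\ngenfiles/ay1\n' > "$DAMAGED"   # stand-in for: sfind /samqfs1 -damaged
while read -r f; do
    # -F matches the path literally; -H prefixes each hit with the log name
    grep -F -H " $f " "$LOGDIR"/* || echo "no archive entry found for $f"
done < "$DAMAGED"
```

Files that match in no log copy at all are the ones whose data is probably lost.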

  7. Next, Look for Missing Files that Were Archived After the Recovery Point Was Created.

Look for Missing Files that Were Archived After the Recovery Point Was Created

The samfsrestore process restores a copy of the file-system metadata from a recovery point file, so that you can find the corresponding file-system data on tape and restore it to its proper locations in the file system. Recovery point files are created prior to the loss of the file system, however. They cannot contain metadata for files created and archived after they were themselves created.

Typically, some files are archived after the last recovery point was created and prior to the loss of a file system. Since the metadata for these files are not in the recovery point file, samfsrestore cannot recover them, even as damaged files. File data does, however, reside on archival media. So you can recreate the metadata and recover the files to their proper place in the file system using the archive logs.

  1. If you have not already done so, log in to the file-system metadata server as root.

    root@solaris:~# 
    
  2. Identify the most recent available archiver log file.

    If the archiver log on the server is still available, it is likely to contain the most recent information. Otherwise, you will need to use a backup copy.

    In the example, the archiver log file samqfs1.archiver.log is on the server in the /var/adm/ subdirectory. We also have dated archiver log file copies in a well-known location, the subdirectory samqfs1_recovery/archivelogs on the independent file system /zfs1. So we have both the latest file, samqfs1.archiver.log, and a recent backup, 20150324:

    root@solaris:~# ls /var/adm/*.archiver.log
    /var/adm/samqfs1.archiver.log
    root@solaris:~# ls /zfs1/samqfs1_recovery/archivelogs
    20150322    20150323    20150324
    root@solaris:~# 
    
  3. Search the most recent copy of the archiver log for entries that were made after the recovery point was created. Use the command grep "time-date-expression" archiver-log, where time-date-expression is a regular expression that matches the date and time where you want to start searching and archiver-log is the path and name of the archiver log copy that you are examining.

    In the example, we lost the file system at 2:02 AM on March 24, 2015. The last recovery point file was made at 2:10 AM on March 23, 2015. So we use the regular expression ^A 2015\/03\/2[34] to search the most recent log file for archived files that were logged on March 23 or 24:

    root@solaris:~# grep "^A 2015\/03\/2[34]" /var/adm/samqfs1.archiver.log
    
  4. When you find an entry for an archived copy of an unrestored file, note the path, name, file type, media type, and location information.

    File types are listed as f for regular files, R for removable-media files, or S for a data segment in a segmented file. The media type is a two-character code (see Appendix B).

    To locate the backup copy, you need the volume serial number of the media volume that stores the copy. If the copy is stored on sequential-access media, such as magnetic tape, also note the hexadecimal value that represents the starting position of the archive (tar) file. If the copy is stored on random-access media, such as archival disk, note the path and file name of the tar file relative to the volume serial number. Finally, if the file is segmented, note the segment length.

    In the example below, the archiver log entries show that the following files were archived after the last recovery point was created:

    root@solaris:~# grep "^A 2015\/03\/2[34]" /var/adm/samqfs1.archiver.log
    A 2015/03/23 10:43:18 li VOL002 all.1 111.1 samqfs1 1053.3 69 genfiles/hops f 0 0
    A 2015/03/23 10:43:18 li VOL002 all.1 111.3 samqfs1 1051.1 104 genfiles/anic f 0 0
    A 2015/03/23 13:09:05 li VOL004 all.1 212.1 samqfs1 1535.2 1971 genfiles/genA0 f 0 0
    A 2015/03/23 13:09:06 li VOL004 all.1 212.20 samqfs1 1534.2 1497 genfiles/genA9 f 0 0
    A 2015/03/23 13:10:15 li VOL004 all.1 212.3f samqfs1 1533.2 6491 genfiles/genA2 f 0 0
    A 2015/03/23 13:12:25 li VOL003 all.1 2.5e samqfs1 1532.2 17717 genfiles/genA13 f 0 0
    A 2015/03/23 13:12:28 li VOL003 all.1 2.7d samqfs1 1531.2 14472 genfiles/genA4 f 0 0
    A 2015/03/23 13:12:40 li VOL003 all.1 2.9c samqfs1 1530.2 19971 genfiles/genA45 f 0 0
    A 2015/03/23 21:49:15 dk DISKVOL1/f2 all.1 2.2e9 samqfs1 1511.2 8971 spcfiles/spcC4 f 0 0
    A 2015/03/23 21:49:15 dk DISKVOL1/f2 all.1 2.308 samqfs1 1510.2 7797 spcfiles/spcC5 f 0 0
    A 2015/03/23 14:01:47 li VOL013 all.1 76a.1 samqfs1 14.5 10485760 bf/dat011/1 S 0 51
    A 2015/03/23 14:04:11 li VOL013 all.1 76a.5002 samqfs1 15.5 10485760 bf/dat011/2 S 0 51
    A 2015/03/23 14:06:24 li VOL013 all.1 1409aa4.1 samqfs1 16.5 184 bf/dat011/3 S 0 51
    A 2015/03/23 18:28:51 li VOL036 all.1 12d.1 samqfs1 11731.1 89128448  rf/rf81 f 0 210
    A 2015/03/23 18:28:51 li VOL034 all.1 15f.0 samqfs1 11731.1 525271552 rf/rf81 f 1 220
    root@solaris:~# 
    

    We note the following information:

    • Eight regular (type f) files are archived (A) on LTO (li) media: genfiles/hops and genfiles/anic at position 0x111 on volume VOL002, genfiles/genA0, genfiles/genA9 and genfiles/genA2 at position 0x212 on volume VOL004, and genfiles/genA13, genfiles/genA4, and genfiles/genA45 at position 0x212 on volume VOL003.

    • Two regular (type f) files are archived (A) on disk (dk) media: spcfiles/spcC4 and spcfiles/spcC5 in the archive (tar) file f2 on volume DISKVOL1.

    • One, three-part, segmented (type S) file is archived on LTO (li) media: bf/dat011, in two segments starting at position 0x76a and one segment starting at position 0x1409aa4 on volume VOL013. Segment /1 is 10485760 bytes long, segment /2 is 10485760 bytes, and segment /3 is 184 bytes.

    • One, regular (type f), volume overflow file archived (A) on LTO (li) media: rf/rf81, starting at position 0x12d on volume VOL036 and continuing at position 0x15f on volume VOL034.

    For a full explanation of the fields in archiver log entries, see Appendix A, "Understanding the Archiver Log".

  5. Repeat the search using any backup archive logs that were created after the recovery point file was created.

    Archiver logs are rolled over frequently. So, if you retain multiple archiver log copies, you may be able to recover damaged files using archive copies that were made before the period covered by the current archiver log.

  6. Now Restore the Damaged and/or Missing Files.

Restore the Damaged and/or Missing Files

Given the media volume and the position of an archive (tar) file on the media, restoring a missing or damaged file is simply a matter of accessing the tar file and extracting the required data file. When the archive files reside on archival disk devices, this is simple, because the tar files reside in randomly accessible directories under a file-system mount point. When the tar file resides on high-capacity, sequential-access media like tape, however, there is an added complication: we cannot normally extract the required data file from the archive file until the latter is staged to a random-access disk device. Since archive files can be large, this can be time-consuming and awkward in a recovery situation. So the procedures below take advantage of the Oracle HSM command request, which reads the archive files into memory and makes them available as if they were being read from disk.
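The procedures below all follow the same basic pattern, which the dry-run sketch here illustrates with the genfiles/ay0 log entry found earlier: bind the archive file on tape to a removable-media file with request, then extract the data file with star. The commands are only echoed, because they require the Oracle HSM software and a loaded tape library, and the scratch file name is a hypothetical; check the request and star man pages for the exact option syntax before running them for real:

```shell
# Dry-run sketch of recovering one file via request + star. Commands are
# echoed, not executed; swap the echo for "$@" on a live HSM system.
run() { echo "would run: $*"; }

MNT=/samqfs1          # mount point of the recovered file system
RMFILE=$MNT/rmfile    # scratch removable-media file (hypothetical name)

# 1. Bind the tar file at position 0x78 on LTO (li) volume VOL012:
run request -m li -p 0x78 -v VOL012 "$RMFILE"
# 2. Extract the damaged file from the archive, as if reading from disk:
run star -xv -f "$RMFILE" genfiles/ay0
# 3. Remove the scratch removable-media file:
run rm "$RMFILE"
```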

Restore as many damaged and missing regular files as you can. For each file, proceed as follows:

  1. Start by recovering regular files that do not span volumes. Use the procedure "Restore Lost and Damaged Regular Files".

  2. Next, recover the segmented files. Use the procedure "Restore Lost and Damaged Segmented Files".

  3. Then restore the regular files that do span volumes. Use the procedure "Restore Lost and Damaged Volume Overflow Files".

  4. Once you have restored all missing and damaged files that have copies, re-enable archiving by removing wait directives from the archiver.cmd file. Re-enable recycling by removing -ignore parameters from the recycler.cmd file.

    The file system is now as close to its original condition as possible. Files that are still damaged or missing cannot be recovered.

  5. Once you have restored all missing and damaged files that have copies, go to "Restoring Archiving File Systems to Normal Operation".

Restoring Files and Directories from Archival Media without a Recovery Point File

If you must recover a file system directly from the archival media, without the assistance of a recovery point file, you can do so. Proceed as follows:

  1. If you are trying to restore files from optical media, stop here and contact Oracle support services for assistance.

  2. Disable Network File System (NFS) sharing for the file system.

  3. Disable archiving and recycling. Use the method outlined in "Stopping Archiving and Recycling Processes".

  4. Reserve a tape drive for the exclusive use of the recovery process. Use the command samcmd unavail drive-equipment-number, where drive-equipment-number is the equipment ordinal number assigned to the drive in the /etc/opt/SUNWsamfs/mcf file.

    The samcmd unavail command makes the drive unavailable to archiving, staging, and releasing processes. In the example, we reserve drive 804:

    root@solaris:~# samcmd unavail 804
    root@solaris:~# 
    
  5. Copy the file /opt/SUNWsamfs/examples/tarback.sh to an alternate location, such as /tmp.

    The tarback.sh file is an executable script that restores files from a specified set of media volumes. The script runs the command star -n against each archive (tar) file on each volume. When a backup copy on tape has no corresponding file in the file system or when the copy on tape is newer than the corresponding file in the file system, star -n restores the copy.

    In the example, we copy the script to /tmp:

    root@solaris:~# cp /opt/SUNWsamfs/examples/tarback.sh /tmp/tarback.sh
    root@solaris:~# 
    
  6. Open the copy of the tarback.sh file in a text editor.

    In the example, we use the vi editor:

    root@solaris:~# vi /tmp/tarback.sh
    #!/bin/sh
    #   script to reload files from SAMFS archive tapes
    STAR="/opt/SUNWsamfs/sbin/star"
    LOAD="/opt/SUNWsamfs/sbin/load"
    UNLOAD="/opt/SUNWsamfs/sbin/unload"
    EQ=28
    TAPEDRIVE="/dev/rmt/3cbn"
    # BLOCKSIZE is in units of 512 bytes (e.g. 256 for 128K)
    BLOCKSIZE=256
    MEDIATYPE="lt"
    VSN_LIST="VSNA VSNB VSNC VSNZ"
    ...
    
  7. If the Oracle HSM utilities star, load, and unload are installed in non-standard locations, edit the default command paths in the copy of the tarback.sh file.

    In the example, all utilities are installed in the default locations, so no edits are needed:

    root@solaris:~# vi /tmp/tarback.sh
    #!/bin/sh
    #   script to reload files from SAMFS archive tapes
    STAR="/opt/SUNWsamfs/sbin/star"
    LOAD="/opt/SUNWsamfs/sbin/load"
    UNLOAD="/opt/SUNWsamfs/sbin/unload"
    ...
    
  8. In the copy of the tarback.sh file, locate the variable EQ. Set its value to the equipment ordinal number of the drive that you reserved for recovery use.

    In the example, we set EQ=804:

    root@solaris:~# vi /tmp/tarback.sh
    #!/bin/sh
    #   script to reload files from SAMFS archive tapes
    STAR="/opt/SUNWsamfs/sbin/star"
    LOAD="/opt/SUNWsamfs/sbin/load"
    UNLOAD="/opt/SUNWsamfs/sbin/unload"
    EQ=804
    ...
    
  9. In the copy of the tarback.sh file, locate the variable TAPEDRIVE. Set its value to the raw path to the device, enclosed in double quotation marks.

    In the example, the raw path to device 804 is /dev/rmt/3cbn:

    root@solaris:~# vi /tmp/tarback.sh
    #!/bin/sh
    #   script to reload files from SAMFS archive tapes
    STAR="/opt/SUNWsamfs/sbin/star"
    LOAD="/opt/SUNWsamfs/sbin/load"
    UNLOAD="/opt/SUNWsamfs/sbin/unload"
    EQ=804
    TAPEDRIVE="/dev/rmt/3cbn"
    ...
    
  10. In the copy of the tarback.sh file, locate the variable BLOCKSIZE. Set its value to the number of 512-byte units in the desired block size.

    In the example, we want a 256-kilobyte block size for the LTO-4 drive. So we specify 512:

    LOAD="/opt/SUNWsamfs/sbin/load"
    UNLOAD="/opt/SUNWsamfs/sbin/unload"
    EQ=804
    TAPEDRIVE="/dev/rmt/3cbn"
    BLOCKSIZE=512
    ...
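    The arithmetic is just the conversion to 512-byte units, which a quick sanity check confirms:

```shell
# BLOCKSIZE counts 512-byte units, so convert the desired tape block
# size from kilobytes: units = KB * 1024 / 512.
echo $((256 * 1024 / 512))   # 256 KB block -> BLOCKSIZE=512
echo $((128 * 1024 / 512))   # 128 KB block -> BLOCKSIZE=256, the script comment's example
```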
    
  11. In the copy of the tarback.sh file, locate the variable MEDIATYPE. Set its value to the two-character media-type code that Appendix B lists for the type of media that the drive supports. Enclose the media type in double quotation marks.

    In the example, we are using an LTO-4 drive. So we specify li:

    EQ=804
    TAPEDRIVE="/dev/rmt/3cbn"
    BLOCKSIZE=512
    MEDIATYPE="li"
    ...
    
  12. In the copy of the tarback.sh file, locate the variable VSN_LIST. As its value, supply a space-delimited list of the volume serial numbers (VSNs) that identify tapes that might contain backup copies of your files. Enclose the list in double quotation marks.

    In the example, we specify volumes VOL002, VOL003, VOL004, VOL013, VOL034, and VOL036:

    EQ=804
    TAPEDRIVE="/dev/rmt/3cbn"
    BLOCKSIZE=512
    MEDIATYPE="li"
    VSN_LIST="VOL002 VOL003 VOL004 VOL013 VOL034 VOL036"
    ...
    
  13. Save the copy of the tarback.sh file. Close the editor.

    EQ=804
    TAPEDRIVE="/dev/rmt/3cbn"
    BLOCKSIZE=512
    MEDIATYPE="li"
    VSN_LIST="VOL002 VOL003 VOL004 VOL013 VOL034 VOL036"
    ...
    :wq
    root@solaris:~# 
    
  14. Execute the /tmp/tarback.sh script.

    root@solaris:~# /tmp/tarback.sh
    
  15. For each restored file, recreate user and group ownership, modes, extended attributes, and access control lists (ACLs), as necessary.

    The /tmp/tarback.sh script cannot restore these types of metadata.

  16. Once you have run the /tmp/tarback.sh script and finished recovering files, go to "Restoring Archiving File Systems to Normal Operation".