CHAPTER 4

Backing Up Data

This chapter describes the backup and dump processes and the information you need to keep your data safe and to prepare for a disaster.

This chapter includes the following subsections:

  • Guarding Against or Troubleshooting Data Loss
  • Precautions Before Starting Data Restoration
  • Prerequisites for Data Recovery
  • SAM-QFS Disaster Recovery Features
  • Guidelines for Performing Dumps
  • Backing Up the Metadata in SAM-QFS File Systems
  • Creating samfsdump Dump Files
  • Disaster Recovery Commands and Tools
  • The samexplorer Script
  • What to Back Up and How Often
  • Additional Backup Considerations
  • Using Archiver Logs
  • How and Where to Keep Copies of Disaster Recovery Files and Metadata

Guarding Against or Troubleshooting Data Loss

TABLE 4-1 shows the usual causes of data loss, with notes and suggestions about how to avoid or respond to each type of loss.


TABLE 4-1 Causes of Data Loss, With Notes and Suggestions

Causes

Notes

Suggestions

User Error

Sun StorEdge QFS file systems are protected from access by unauthorized users by the standard UNIX permissions and superuser mechanisms.

You can also restrict administrative actions to an optional administrative group.

 

System reconfiguration

File systems can be made unavailable by any of the following:

  • Dynamically configured SAN components
  • Overwritten system configuration files
  • Failure of connectivity components

Rebuild the file system only after verifying that a configuration problem is not the cause of the apparent failure. See Precautions Before Starting Data Restoration and To Troubleshoot an Inaccessible File System, and Recovering From Catastrophic Failure.

Hardware failure

Using disk storage systems managed by hardware RAID has the following advantages over systems managed by software RAID:

  • Greater reliability
  • Lower resource consumption on the host system
  • Better performance

Hardware-based inconsistencies in Sun StorEdge QFS file systems can be checked and fixed by unmounting the file system and running the samfsck(1M) command.

Use hardware RAID disk storage systems wherever possible.

Use samfsck(1M) to check and fix hardware-based file system consistency problems. See To Troubleshoot an Inaccessible File System for an example. Also see Recovering From Catastrophic Failure.



Precautions Before Starting Data Restoration

Some apparent data losses are actually caused by cabling problems or configuration changes.



caution icon

Caution - Do not reformat a disk, relabel a tape, or make other irreversible changes until you are convinced that the data on the disk or tape is completely unrecoverable.
Make sure to eliminate the fundamental causes for a failure before making irreversible changes. Back up anything you change before you change it, if possible.



Take the steps in the following procedure, "To Troubleshoot an Inaccessible File System," before commencing a data recovery process.


procedure icon  To Troubleshoot an Inaccessible File System

1. Check cables and terminators.

2. If you cannot read a tape or magneto-optical cartridge, try cleaning the heads in the drive, or try reading the cartridge in a different drive.

3. Check the current state of your hardware configuration against the documented hardware configuration.

Go to Step 4 only when you are certain that a configuration error is not to blame.

4. Unmount the file system, and run samfsck(1M).

For example:


# umount file_system_name
# samfsck file_system_name
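If the check reports inconsistencies, a repair pass can then be run. The sketch below only prints the commands rather than running them, and the -F (repair) option is an assumption to verify against the samfsck(1M) man page for your release:

```shell
# Sketch only: print the check and repair commands for a file system.
# -F (repair mode) is assumed from the samfsck(1M) man page; without
# it, samfsck reports problems but does not fix them.
FS=samfs1                       # hypothetical family-set name
echo "umount /sam1"
echo "samfsck ${FS}"            # nondestructive check
echo "samfsck -F ${FS}"         # repair pass
```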

5. If the file system is still inaccessible, use the procedures in the other chapters of this manual to restore it.


Prerequisites for Data Recovery

For SAM-QFS file systems, the following are prerequisites for disaster recovery:

  • The effectiveness of any SAM-QFS recovery method relies primarily on frequent archiving. See Metadata Used in Disaster Recovery.

  • If recent metadata is not available, archiver logs can help you recreate the file system directly from archive media. See Using Archiver Logs.



Note - Using archiver logs to retrieve data is much more time consuming than using metadata, so do not rely on this approach; use it only when there is no alternative.




Metadata Used in Disaster Recovery

Metadata consists of information about files, directories, access control lists, symbolic links, removable media, segmented files, and the indexes of segmented files. Metadata must be restored before lost data can be retrieved.

With up-to-date metadata available, the data can be restored as described in the following subsections.

.inodes File Characteristics

In Sun StorEdge QFS file systems, the .inodes file contains all the metadata except for the directory namespace (which consists of the pathnames to the directories where the files are stored). The .inodes file is located in the root (/) directory of the file system. For a file system to be restored, the .inodes file is needed along with the additional metadata.

FIGURE 4-1 illustrates some characteristics of the .inodes file. The arrows with the dashed lines indicate that the .inodes file points to file contents on disk and to the directory namespace. The namespace also points back to the .inodes file. Also indicated is that, in SAM-QFS file systems where archiving is being done, the .inodes file also points to archived copies.


Figure showing relationship between QFS and SAM-QFS file systems.

FIGURE 4-1 The .inodes File in Sun StorEdge QFS File Systems

Note - Sun StorEdge QFS has no archiving capability. See the Sun StorEdge QFS Installation and Upgrade Guide for how to back up Sun StorEdge QFS metadata.



The .inodes file is not archived. For more about protecting the .inodes file in these types of file systems, see SAM-QFS Disaster Recovery Features and Backing Up the Metadata in SAM-QFS File Systems.

More About Directory Pathnames

As indicated in FIGURE 4-1, the namespace (in the form of directories) does not point to the archive media. The directory pathnames for each archived file are copied into the headers of the tar(1) files on the archive media that contain the files, but, for reasons illustrated in TABLE 4-3, the directory pathnames in the tar file headers may get out of sync with the actual locations of the files on disk.

One reason why the two pathnames can get out of sync is that the pathnames in the tar file header do not show the originating file system. TABLE 4-2 shows how the directory pathname shown in the left column would appear in the tar file header in the right column, without the component that shows the name of the originating file system /samfs1.


TABLE 4-2 Comparing a Full Pathname With a Pathname in a tar Header

Full Pathname

Pathname in tar Header on Archive Media

/samfs1/dir1/filea

dir1/

dir1/filea


TABLE 4-3 summarizes an example scenario, shows the result, and suggests a precaution.


TABLE 4-3 Example of Potential Pitfalls

Scenario

Result

Precaution

File is saved to disk, archived, then later moved, either by use of the mv(1) command or by restoration from a samfsdump(1M) output file using samfsrestore(1M) into an alternate path or file system.

  • Archive copy is still valid.
  • .inodes file still points to the archive media.
  • Pathname in the tar file header no longer matches the namespace on disk.
  • Name of the file system is not available in the tar file header.

Keep the data from each file system on its own unique set of tapes or other archive media, and do not mix data from multiple file systems.


The potential for inconsistency does not interfere with recovery in most cases, because the directory pathnames in the tar headers are not used when data is being recovered from an archive. The directory pathnames on the tar headers on the archive media are only used in an unlikely disaster recovery scenario where no metadata is available and the file system must be reconstructed from scratch using the tar command.


SAM-QFS Disaster Recovery Features

The features of SAM-QFS file systems described in TABLE 4-4 streamline and speed up data restoration and minimize the risk of losing data in the case of unplanned system outage.


TABLE 4-4 Disaster Recovery Features of SAM-QFS File Systems

Feature

Comparison

Advantage

Identification records, serial writes, and error checking are dynamically used to check and manage file system consistency.

Eliminates the need to check file systems (by running the fsck(1M) command) before re-mounting the file systems or to rely on journal recovery mechanisms.

Speed. Because each file system is already checked and repaired when the server reboots after an outage, the server gets back into production more quickly.

Files are archived transparently and continuously. Archiving is configurable: after specified sleep intervals, via scheduled cron(1M) jobs, or on demand.

Nightly or weekly backups interfere with normal use of the system while the backups are being done and protection is not continuous.

Data protection. Because archiving is continuous, there are no gaps in data protection. Data backups no longer interfere with production.

Data can remain on disk or can be automatically released from the disk and then transparently staged back from archive media when needed.

Files no longer need to take up disk space. Files that are removed from the disk are instantly available without administrator intervention.

Speed. Disk space requirements may be lessened without inconvenience to users.

Files can be archived to as many as four separate media, each of which can be of a different type, and with Sun SAM-Remote, to remote locations.

Multiple copies can be easily made in multiple locations.

Data Protection. With the potential for multiple copies at multiple locations, the loss of one copy or even of an entire location does not mean a complete loss of data.

Files are archived in standard tar(1) format files.

tar files can be restored onto any file system type.

Flexibility. SAM-QFS file systems do not need to be available.

Metadata can be restored separately from data. Restoration of the files' contents to disk is configurable: files can be staged only when they are accessed or in advance of anticipated need.

Restoring metadata allows users to access the system and their data without waiting until all data is restored to disk.

Speed. Access to the server is quicker than if all data needed to be restored before user access was allowed.



Guidelines for Performing Dumps

At any given time, some files need to be archived because they are new, while others need to be rearchived because they are modified or because their archive media is being recycled. TABLE 4-5 defines the terms that apply to files archived onto archive media.


TABLE 4-5 Terms Related to Dumping Metadata

Term

When Used

Comments

stale

The archived copy does not match the online file.

A new copy must be created. Stale files can be detected using the sls command with the -D option. See the sls(1M) man page.

expired

No inode points to the archived copy.

A new archive copy was already created, and the file's inode correctly points to the new archive copy.


Dumping metadata during a time when files are not being created or modified avoids the dumping of metadata for files that are stale and minimizes the creation of damaged files.

If any stale files exist while metadata and file data are being dumped, the samfsdump command generates a warning message. The following message is displayed for any file that does not have an up-to-date archive copy:


/pathname/filename: Warning! File data will not be recoverable (file will be marked damaged).



caution icon

Caution - If you see the above message and do not rerun the samfsdump command after the specified file is archived, the file will not be retrievable.



If samfsrestore(1M) is later used to attempt to restore the damaged file, the following message is displayed:


/pathname/filename: Warning! File data was previously not recoverable (file is marked damaged).


Backing Up the Metadata in SAM-QFS File Systems

In SAM-QFS file systems, the archiver(1M) command can copy both file data and metadata (other than the .inodes file) to archive media. For example, if you create a SAM-QFS file system with a family-set name of samfs1, you can tell the archiver command to create an archive set also called samfs1. (See the archiver.cmd(4) man page for more information.) You can later retrieve damaged or destroyed file systems, files, and directories as long as the archive media onto which the archive copy was written has not been erased and as long as a recent metadata dump file is available.

The samfsdump(1M) command allows you to back up metadata separately from the file system data. The samfsdump command creates metadata dumps (including the .inodes file) either for a complete file system or of a portion of a file system. A cron(1M) job can be set up to automate the process.

If you dump metadata often enough using samfsdump, the metadata is always available to restore file data from the archives using samfsrestore(1M).
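As a sketch, restoring metadata from the dump file created in CODE EXAMPLE 4-2 might look like the following; the dump-file name is taken from that example, and the commands are printed rather than executed here:

```shell
# Print the restore steps: run samfsrestore from the file-system
# mount point, pointing at the dated metadata dump file
# (names assumed from CODE EXAMPLE 4-2 later in this chapter).
DUMP=/dump_sam1/dumps/040214
echo "cd /sam1"
echo "samfsrestore -f ${DUMP}"
```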



Note - Files written to the file system after metadata dumps begin might not be archived, and archive copies on cartridges might not be reflected in the metadata dump. Consequently, the files might not be known to the system if the dump is used to restore the file system. Files written to the file system or archived after the metadata dump are picked up during the next metadata dump.



In summary, using the samfsdump method to dump metadata has the following advantages:

  • During a file system restoration, files and directories are assigned new inode numbers based on directory location, and only the required number of inodes are assigned. Inodes are assigned as the samfsrestore process restores the directory structure.

  • File data is defragmented, because files that were written in a combination of small disk allocation units (DAUs) and large DAUs are staged back to the disk using appropriately sized DAUs.


Creating samfsdump Dump Files

If you have multiple SAM-QFS file systems, make sure that you routinely dump the metadata for every file system. Look in /etc/vfstab for all file systems of type samfs.

Make sure to save the dump for each file system in a separate file.

The following procedures describe how to find all the samfs type file systems and to dump metadata using samfsdump(1M):



Note - The examples in these procedures use the names /sam1 for a SAM-QFS file system mount point and /dump_sam1 for the dump file system.



Using samfsdump With the -u Option

The samfsdump(1M) command's -u option causes unarchived file data to be interspersed with the metadata. Dump files created with -u are substantially larger and take longer to complete, but restoring them returns the file system to its exact state at the time of the dump (see TABLE 4-9).
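A date-stamped samfsdump invocation that includes unarchived data might look like the sketch below; the dump directory is assumed from the examples later in this chapter, and the command is printed rather than executed:

```shell
# Build the dated dump-file name (yymmdd format) and show the
# -u invocation. DUMPDIR is an assumed site-specific location.
DUMPDIR=/dump_sam1/dumps
STAMP=$(date +%y%m%d)               # e.g. 040214 for February 14, 2004
echo "samfsdump -u -f ${DUMPDIR}/${STAMP}"
```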


procedure icon  To Find Sun StorEdge QFS File Systems

• Look in the vfstab(4) file to find mount points for all samfs-type file systems.

CODE EXAMPLE 4-1 shows three file systems of type samfs with the file system names samfs1, samfs2, and samfs3. The mount points are /sam1, /sam2, and /sam3.


CODE EXAMPLE 4-1 File Systems Defined in /etc/vfstab
# vi /etc/vfstab
samfs1 -       /sam1 samfs   -       no high=80,low=70,partial=8
samfs2 -       /sam2 samfs   -       no high=80,low=50
samfs3 -       /sam3 samfs   -       no high=80,low=50
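The same lookup can be scripted. The sketch below runs awk against sample vfstab lines like those in CODE EXAMPLE 4-1; on a live system, point awk at /etc/vfstab instead:

```shell
# Print the mount point (field 3) of every samfs-type entry (field 4).
printf '%s\n' \
  'samfs1 - /sam1 samfs - no high=80,low=70,partial=8' \
  'samfs2 - /sam2 samfs - no high=80,low=50' |
awk '$4 == "samfs" { print $3 }'
# prints /sam1 and /sam2
```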


procedure icon  To Create a Sun StorEdge SAM-FS Metadata Dump File Manually Using File System Manager

Taking a metadata snapshot through the File System Manager interface is the equivalent of using the samfsdump command from the command line. You can take a metadata snapshot from the File System Manager interface at any time.

To take a metadata snapshot:

1. From the Servers page, click the server on which the file system that you want to administer is located.

The File Systems Summary page is displayed.

2. Select the radio button next to the file system for which you want to schedule a metadata snapshot.

3. From the Operations menu, choose Take Metadata Snapshots.

The Take Metadata Snapshot pop-up window is displayed.

4. In the Fully Qualified Snapshot File field, type the path and the name of the snapshot file that you want to create.



Note - You must type the same path that is specified in the Snapshot File Path field on the Schedule Metadata Snapshot page for this file system. Otherwise, this snapshot file will not be displayed on the Restore File System page when you try to restore files for the file system.



5. Click Submit.

See the File System Manager online help file for complete information on creating metadata snapshots.


procedure icon  To Create a Sun StorEdge SAM-FS Metadata Dump File Manually Using the Command Line

1. Log in as root.

2. Change to the mount point of the samfs type file system or to the directory that you are dumping.


# cd /sam1

See To Find Sun StorEdge QFS File Systems if needed.

3. Enter the samfsdump(1M) command to create a metadata dump file.

CODE EXAMPLE 4-2 shows a SAM-QFS file system metadata dump file being created on February 14, 2004 in the dumps subdirectory of the dump file system /dump_sam1. The output of the ls(1) command shows that the dump file is named with the date in yymmdd format, 040214.


CODE EXAMPLE 4-2 Creating a Metadata Dump File
# samfsdump -f /dump_sam1/dumps/`date +\%y\%m\%d`
# ls /dump_sam1/dumps
040214


procedure icon  To Create a Sun StorEdge SAM-FS Metadata Dump File Automatically From the File System Manager

Scheduling a metadata snapshot through the File System Manager interface is the equivalent of creating a crontab(1) entry that automates the Sun StorEdge SAM-FS software samfsdump(1M) process.

To schedule a metadata snapshot:

1. From the Servers page, click the server on which the archiving file system that you want to administer is located.

The File Systems Summary page is displayed.

2. Select the radio button next to the archiving file system for which you want to schedule a metadata snapshot.

3. From the Operations menu, choose Schedule Metadata Snapshots.

The Schedule Metadata Snapshots page is displayed.

4. Specify values on the Schedule Metadata Snapshots page.

For complete instructions on using this page, see the File System Manager online help file.

5. Click Save.


procedure icon  To Create a Sun StorEdge SAM-FS Metadata Dump File Automatically Using cron

1. Log in as root.

2. Enter the crontab(1) command with the -e option to make an entry to dump the metadata for each file system.

The crontab entry in CODE EXAMPLE 4-3 runs at 10 minutes past 2 a.m. every day and does the following:
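A crontab entry consistent with this description (daily at 2:10 a.m., producing the dated file name shown in the March 20, 2005 example below) might look like the following sketch; treat the exact paths as assumptions to adapt for your site:

```
# Hypothetical crontab entry: at 2:10 a.m. daily, change to the file
# system mount point and dump its metadata to a date-stamped file.
10 2 * * * ( cd /sam1; /opt/SUNWsamfs/sbin/samfsdump -f /dump_sam1/dumps/`date +\%y\%m\%d` )
```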



Note - Make the crontab entry on a single line. Because the line in the previous screen example is too wide for the page's format, it breaks into multiple lines.



If the crontab entry in the previous screen example ran on March 20, 2005, the full pathname of the dump file would be: /dump_sam1/dumps/050320.


Disaster Recovery Commands and Tools

TABLE 4-6 summarizes the commands used most frequently in disaster recovery efforts.


TABLE 4-6 Disaster Recovery Commands and Tools

Command

Description

Used By

qfsdump(1M)

Dumps Sun StorEdge QFS file system metadata and data.

Sun StorEdge QFS

qfsrestore(1M)

Restores Sun StorEdge QFS file system metadata and data.

Sun StorEdge QFS

samfsdump(1M)

Dumps SAM-QFS file system metadata.

SAM-QFS

samfsrestore(1M)

Restores SAM-QFS file system metadata.

SAM-QFS

star(1M)

Restores file data from archives.

SAM-QFS


For more information about these commands, see their man(1) pages. Other scripts and helpful sample files are located in /opt/SUNWsamfs/examples or are available from Sun Microsystems.

TABLE 4-7 describes some disaster recovery utilities in the /opt/SUNWsamfs/examples directory and explains their purpose. You must modify all of the listed shell scripts, except for recover.sh(1M), to suit your configuration before using them. See the comments in the files.


TABLE 4-7 Disaster Recovery Utilities

Utility

Description

restore.sh(1M)

Executable shell script that stages all files and directories that were online at the time a samfsdump(1M) dump was taken. This script requires as input a log file generated by samfsrestore(1M). Modify the script as instructed in its comments. See also the restore.sh(1M) man page.

recover.sh(1M)

Executable shell script that recovers files from tape, using input from the archiver log file. If used with SAM-Remote clients or server, the recovery must be performed on the server to which the tape library is attached. For more information about this script, see the recover.sh(1M) man page and the comments in the script itself. Also see Using Archiver Logs.

stageback.sh

Executable shell script that stages files that have been archived on accessible areas of a partially damaged tape. Modify the script as instructed in the script's comments. For how the script is used, see Damaged Tape Volume - No Other Copies Available.

tarback.sh(1M)

Executable shell script that recovers files from tapes by reading each tar(1) file. Modify the script as instructed in the script's comments. For more information about this script, see the tarback.sh man page. See also Unreadable Tape Label - No Other Copies Available.




caution icon

Caution - Improper use of the restore.sh, recover.sh, or tarback.sh scripts can damage user or system data. Read the man pages for these scripts before attempting to use them. For additional help with using these scripts, contact Sun customer support.




The samexplorer Script

The /opt/SUNWsamfs/sbin/samexplorer script (called info.sh in software versions before 4U1) is not a backup utility, but it should be run whenever changes are made to the system's configuration.

The samexplorer(1M) script creates a file containing all the configuration information needed for reconstructing a SAM-QFS installation from scratch if you ever need to rebuild the system. You can use the crontab(1) command with the -e option to create a cron(1M) job to run the samexplorer script at desired intervals.

The samexplorer script writes the reconfiguration information to /tmp/SAMreport.

Make sure that the SAMreport file is moved from the /tmp directory after creation to a fixed disk that is separate from the configuration files and outside the SAM-QFS environment. For more information about managing the SAMreport file, see the samexplorer(1M) man page.
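One way to automate both steps is a cron job that runs samexplorer and then moves the report off /tmp; the schedule and destination directory below are assumptions for illustration:

```
# Hypothetical crontab entry: monthly on the 1st at 4 a.m., regenerate
# the report and move it to a dated copy outside the SAM-QFS environment.
0 4 1 * * /opt/SUNWsamfs/sbin/samexplorer && mv /tmp/SAMreport /export/reports/SAMreport.`date +\%y\%m\%d`
```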


What to Back Up and How Often

TABLE 4-8 describes the files that should be backed up and how often the files should be backed up onto a location outside the file system environment.

Where "Regularly" is shown in the "Backup Frequency" column, each site's system administrator should decide the appropriate intervals based on that site's requirements. Except where specified, use whatever backup procedures you choose.


TABLE 4-8 Which Files to Back Up and How Often

Data Type

Backup Frequency

Comments

Site-modified versions of file system backup and restoration shell scripts.

After modification.

See the default scripts listed in Disaster Recovery Commands and Tools.

Site-created shell scripts and cron(1) jobs created for backup and restoration.

After creation and after any modification.

 

SAMreport output from the samexplorer(1M) script.

At installation and after any configuration changes.

See the samexplorer script and SAMreport output file described in The samexplorer Script.

Sun StorEdge QFS metadata and data (see Metadata Used in Disaster Recovery for definitions).

Regularly

Files altered after qfsdump(1M) is run cannot be recovered by qfsrestore(1M), so take dumps frequently. For more information, see Metadata Used in Disaster Recovery.

SAM-QFS metadata (see Metadata Used in Disaster Recovery for definitions).

Regularly

Use the samfsdump(1M) command to back up metadata. Files altered after samfsdump is run cannot be recovered by samfsrestore(1M), so take dumps frequently or at least save the inodes information frequently. For more information, see Backing Up the Metadata in SAM-QFS File Systems.

SAM-QFS device catalogs.

Regularly

Back up all library catalog files, including the historian file.

Library catalogs for each automated library, each pseudolibrary on Sun SAM-Remote clients, and for the historian (for cartridges that reside outside the automated libraries) are in /var/opt/SUNWsamfs/catalog.
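A sketch of backing up the catalog directory to a dated tar file follows; the destination path is an assumption, and a scratch directory stands in for the live /var/opt/SUNWsamfs/catalog so the example is self-contained:

```shell
# Stand-in for /var/opt/SUNWsamfs/catalog so the sketch runs anywhere;
# on a live system, point CATDIR at the real catalog directory.
CATDIR=/tmp/catalog.demo
mkdir -p "$CATDIR"
touch "$CATDIR/historian"                     # historian catalog file
# Create a dated tar archive of the catalog directory, then list it.
tar cf "/tmp/catalog.$(date +%y%m%d).tar" -C "$(dirname "$CATDIR")" "$(basename "$CATDIR")"
tar tf "/tmp/catalog.$(date +%y%m%d).tar"
```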

Archiver log files from a SAM-QFS file system where the archiver is being used.

Regularly

Specify a pathname and name for an archiver log file in the archiver.cmd file and back up the archiver log file. See the archiver.cmd(4) man page for how to specify an archiver log file for each file system. Also see Using Archiver Logs.

Configuration files and other similar files modified at your site. Note that these reside outside the SAM-QFS file system.

At installation and after any modification

The following files may be created at your site in the /etc/opt/SUNWsamfs directory:

archiver.cmd(4)

defaults.conf(4)

diskvols.conf(4)

hosts.fsname

hosts.fsname.local

mcf(4)

preview.cmd(4)

recycler.cmd(4)

releaser.cmd(4)

rft.cmd(4)

samfs.cmd(4)

stager.cmd(4)

Network-attached-library configuration files.

At installation and after any modification

If using network-attached libraries, make sure to back up the configuration files. The exact names of the files are listed in the Equipment Identifier field of the /etc/opt/SUNWsamfs/mcf file on each line that defines a network-attached robot. See the mcf(4) man page for more details.

Sun SAM-Remote configuration files.

At installation and after any modification

If using Sun SAM-Remote software, make sure to back up the configuration files. The exact names of the files are listed in the Equipment Identifier field of the /etc/opt/SUNWsamfs/mcf file on each line that defines a Sun SAM-Remote client or server. See the mcf(4) man page for more details.

Installation files.

At installation and after any modification

The following files are created by the software installation process. If you have made local modifications, preserve (or back up) these files:

/etc/opt/SUNWsamfs/inquiry.conf[1]

/opt/SUNWsamfs/sbin/ar_notify.sh*

/opt/SUNWsamfs/sbin/dev_down.sh*

/opt/SUNWsamfs/sbin/recycler.sh*

/kernel/drv/samst.conf*

/kernel/drv/samrd.conf

Files modified at installation time.

At installation and after any modification

The following files are modified as part of the software installation process:

/etc/syslog.conf

/etc/system

/kernel/drv/sd.conf*

/kernel/drv/ssd.conf*

/kernel/drv/st.conf*

/usr/kernel/drv/dst.conf*

Back up the above files so that you can restore them if any are lost or if the Solaris OE is reinstalled. If you modify the files, back them up again.

SUNWqfs and SUNWsamfs software packages and patches.

Once, shortly after downloading

The Sun StorEdge QFS and Sun StorEdge SAM software can be reinstalled easily from the release package and patches. Make sure you have a record of the revision level of the currently running software.

If the software is on a CD-ROM, store the CD-ROM in a safe place.

If you download the software from the Sun Download Center, back up the downloaded packages and patches. This saves time if you have to reinstall the software, because you avoid downloading a fresh copy.

Solaris OS and patches; and unbundled patches.

At installation

The Solaris OE can be reinstalled easily from the CD-ROM, but make sure you have a record of all installed patches. This information is captured in the SAMreport file generated by the samexplorer(1M) script, which is described under The samexplorer Script. This information is also available from the Sun Explorer tool.



Additional Backup Considerations

Consider the following points when preparing your site's disaster recovery plan.

TABLE 4-9 compares the types of dumps that are done in the various file system types.


TABLE 4-9 Types of Dumps Performed on Sun StorEdge QFS Compared to SAM-QFS File Systems

File System Type

Dump Command Output

Notes

Sun StorEdge QFS

A qfsdump(1M) command generates a dump of both metadata and data.

See the Sun StorEdge QFS Installation and Upgrade Guide for how to back up Sun StorEdge QFS metadata.

SAM-QFS

The samfsdump(1M) command without the -u option generates a metadata dump file.

A metadata dump file is relatively small, so you should be able to store many more metadata dump files than data dump files. Restoration of the output of samfsdump without the -u option is quicker, because the data is not restored until accessed by a user.

 

The samfsdump(1M) command with the -u option dumps file data for files that do not have a current archive copy.

The dump files are substantially larger, and the command takes longer to complete. However, restoration of the output from samfsdump with -u restores the file system back to its state when the dump was taken.


Retain enough data and metadata to ensure that you can restore the file systems according to your site's needs. The appropriate number of dumps to save depends, in part, on how actively the system administrator monitors the dump output. If an administrator is monitoring the system daily to make sure the samfsdump(1M) or qfsdump(1M) dumps are succeeding (making sure enough tapes are available and investigating dump errors), then keeping a minimum number of dump files to cover vacations, long weekends, and other absences might be enough.

If your site is using the sam-recycler(1M) command to reclaim space on archive media, it is critical that you make metadata copies after sam-recycler has completed its work. If a metadata dump is created before sam-recycler exits, the information it contains about archive copies becomes out of date as soon as sam-recycler runs. Also, some archive copies may become inaccessible because the sam-recycler command may cause archive media to be relabeled.

Check root's crontab(1) entry to find out if and when the sam-recycler command is being run, and then, if necessary, schedule the creation of metadump files around the sam-recycler execution times. For more about recycling, see the Sun StorEdge SAM-FS Storage and Archive Management Guide.
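For example, you can scan a crontab listing for recycler entries; the sketch below runs grep against a sample line (on a live system, pipe the output of crontab -l as root instead):

```shell
# Find any scheduled sam-recycler runs in a crontab listing.
# Sample line shown here; live usage: crontab -l | grep sam-recycler
printf '45 1 * * * /opt/SUNWsamfs/sbin/sam-recycler\n' | grep sam-recycler
```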

Off-site data storage is an essential part of a disaster recovery plan. In the event of a disaster, the only safe data repository might be an offsite vault. Beyond the recommended two copies of all files and metadata that you should be keeping in house as a safeguard against media failure, consider making a third copy on removable media and storing it offsite.

Sun SAM-Remote offers you the additional alternative of making archive copies in remote locations on a LAN or WAN. Multiple Sun SAM-Remote servers can be configured as clients to one another in a reciprocal disaster recovery strategy.

To restore all files that were online, run the samfsrestore command with the -g option.

The log file generated by the samfsrestore command's -g option contains a list of all files that were on the disk when the samfsdump(1M) command was run. This log file can be used in conjunction with the restore.sh shell script to restore the files on disk to their predisaster state. The restore.sh script takes the log file as input and generates stage requests for files listed in the log. By default, the restore.sh script restores all files listed in the log file.

If your site has thousands of files that need to be staged, consider splitting the log file into manageable chunks and running the restore.sh script against each of those chunks separately to ensure that the staging process does not overwhelm the system. You can also use this approach to ensure that the most critical files are restored first. For more information, see the comments in /opt/SUNWsamfs/examples/restore.sh.
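The chunking idea can be sketched with split(1); the log name and the way restore.sh consumes a chunk are assumptions to check against the comments in /opt/SUNWsamfs/examples/restore.sh:

```shell
# Create a small sample restore log, split it into 2-line chunks, and
# show the per-chunk command that would be run on a live system.
printf '%s\n' /sam1/a /sam1/b /sam1/c /sam1/d /sam1/e > /tmp/restore.log
split -l 2 /tmp/restore.log /tmp/chunk.
for chunk in /tmp/chunk.*; do
    # Live system (hypothetical usage):
    #   sh /opt/SUNWsamfs/examples/restore.sh "$chunk"
    echo "would restore from $chunk"
done
```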


Using Archiver Logs

Archiver logging should be enabled in the archiver.cmd(4) file. Because archiver logs list every archived file and its location on cartridges, the logs can be used to recover lost files that were archived after the last set of metadata dumps and backup copies was created.

Be aware of the following considerations:

Set up and manage the archive logs by performing the following procedures:


procedure icon  To Set Up Archiver Logging

• Enable archive logging in the archiver.cmd file (in the /etc/opt/SUNWsamfs directory).

See the archiver.cmd(4) man page. The archiver log files are typically written to /var/adm/logfilename. The directory where you direct the logs to be written should reside on a disk outside the SAM-QFS environment.


procedure icon  To Save Archiver Logs

• Ensure that archiver log files are cycled regularly by creating a cron(1M) job that moves the current archiver log files to another location.

The screen example below shows how to create a dated copy of an archiver log named /var/adm/archlog at 3:15 a.m. every Sunday. The dated copy is stored in /var/archlogs.



Note - If you have multiple archiver logs, create a crontab entry for each one.




# crontab -e

15 3 * * 0 (mv /var/adm/archlog /var/archlogs/`date +%y%m%d`; touch /var/adm/archlog)

:wq



How and Where to Keep Copies of Disaster Recovery Files and Metadata

Consider writing scripts to create tar(1) files that contain copies of all the relevant disaster recovery files and metadata described in this chapter and to store the copies outside the file system. Depending on your site's policies, put the files into one or more of the locations described in the following list:

For information on removable media files, see the request(1) man page.

This approach ensures that the disaster recovery files and metadata are archived separately from the file system to which they apply. You might also consider archiving multiple backup copies for additional redundancy.

Observe the following precautions:

You can obtain lists of all directories containing removable media files by using the sls(1M) command. These listings can be emailed. For more information about obtaining file information, see the sls(1M) man page.


[1] Protect this file only if you modify it.