StorageTek Storage Archive Manager and StorageTek QFS Software Installation and Configuration Guide, Release 5.4, E42062-02
QFS file systems are the basic building blocks of all SAM-QFS solutions. Used on their own, they offer high performance, effectively unlimited capacity, and support for extremely large files. When used with Storage Archive Manager and suitably configured archival storage, they become SAM-QFS archiving file systems. Both archiving and non-archiving QFS file systems can then form the basis of more complex, multiple-host and high-availability configurations. This chapter outlines the basic tasks involved in creating and configuring them.
Creating and configuring a basic QFS file system is straightforward. In each case, you perform the following tasks:
Prepare the disk devices that will support the file system.
Create a Master Configuration File (mcf).
Create the file system using the /opt/SUNWsamfs/sbin/sammkfs command.
Add the new file system to the host's virtual file system configuration by editing the /etc/vfstab file.
Mount the new file system.
The process can be performed using either the graphical File System Manager interface or a text editor and a command-line terminal. In the examples, we use the editor-and-command-line method to make the parts of the process explicit and thus easier to understand.
For simplicity and convenience during an initial SAM-QFS configuration, the procedures in this section set file-system mount options in the configuration file for the Solaris virtual file system, /etc/vfstab. But most options can also be set in an optional /etc/opt/SUNWsamfs/samfs.cmd file or from the command line. See the samfs.cmd and mount_samfs man pages for details.
Before you start the configuration process, select the disk resources required for your planned configuration. You can use raw device slices, ZFS zvol volumes, or Solaris Volume Manager volumes.
ms File System

Log in to the file-system host as root. Log in to the global zone if the host is configured with zones.
root@solaris:~#
Create the file /etc/opt/SUNWsamfs/mcf.
The mcf (master configuration file) is a table of six columns separated by white space, each representing one of the parameters that define a QFS file system: Equipment Identifier, Equipment Ordinal, Equipment Type, Family Set, Device State, and Additional Parameters. The rows in the table represent file-system equipment, which includes both storage devices and groups of devices (family sets).
You can create the mcf file by selecting options in the SAM-QFS File System Manager graphical user interface or by using a text editor. In the example below, we use the vi text editor:
root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
~
~
"/etc/opt/SUNWsamfs/mcf" [New File]
For the sake of clarity, enter column headings as comments.
Comment rows start with a hash sign (#):
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#----------------- --------- --------- --------- ------ ----------------
In the Equipment Identifier field (the first column) of the first row, enter the name of the new file system. In this example, the file system is named qfsms:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms
In the Equipment Ordinal field (the second column), enter a number that will uniquely identify the file system. The equipment ordinal number uniquely identifies all equipment controlled by SAM-QFS. In this example, we use 100 for the qfsms file system:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100
In the Equipment Type field (the third column), enter the equipment type for a general-purpose QFS file system, ms:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms
In the Family Set field (the fourth column), enter the name of the file system. The Family Set parameter defines a group of equipment that are configured together to form a unit, such as a robotic tape library and its resident tape drives or a file system and its component disk devices.
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms        qfsms
Enter on in the Device State column, and leave the Additional Parameters column blank. This row is complete:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms        qfsms     on
Start a new row. Enter the identifier for one of the disk devices that you selected in the Equipment Identifier field (the first column), and enter a unique number in the Equipment Ordinal field (the second column). In the example, we indent the device line to emphasize that the device is part of the qfsms file-system family set, and we increment the equipment number of the family set to create the device number, in this case 101:
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms        qfsms     on
 /dev/dsk/c1t3d0s3 101
In the Equipment Type field of the disk device row (the third column), enter the equipment type for a disk device, md:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms        qfsms     on
 /dev/dsk/c1t3d0s3 101       md
Enter the family set name of the file system in the Family Set field of the disk device row (the fourth column), enter on in the Device State field (the fifth column), and leave the Additional Parameters field (the sixth column) blank. The family set name qfsms identifies the disk equipment as part of the hardware for the file system.
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms        qfsms     on
 /dev/dsk/c1t3d0s3 101       md        qfsms     on
Now add entries for any remaining disk devices, save the file, and quit the editor.
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms        qfsms     on
 /dev/dsk/c1t3d0s3 101       md        qfsms     on
 /dev/dsk/c1t4d0s5 102       md        qfsms     on
:wq
root@solaris:~# 
Check the mcf file for errors by running the sam-fsd command.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
root@solaris:~# sam-fsd
If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step.
In the example below, sam-fsd reports an unspecified problem with a device:
root@solaris:~# sam-fsd
Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem qfsms
sam-fsd: Problem with file system devices.
Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed a letter o instead of a 0 in the slice number part of the equipment name for device 102, the second md device:
qfsms              100       ms        qfsms     on
 /dev/dsk/c0t0d0s0 101       md        qfsms     on
 /dev/dsk/c0t3d0so 102       md        qfsms     on
If the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step. The example below is a partial listing of error-free output:
root@solaris:~# sam-fsd
Trace file controls:
sam-amld       /var/opt/SUNWsamfs/trace/sam-amld
               cust err fatal ipc misc proc date
               size    10M  age 0
sam-archiverd  /var/opt/SUNWsamfs/trace/sam-archiverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
sam-catserverd /var/opt/SUNWsamfs/trace/sam-catserverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
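sam-fsd is the authoritative validator, but a quick field count can catch column mistakes like the typo shown above even before you run it. The shell sketch below is purely illustrative and not part of SAM-QFS; it assumes the mcf layout used in this chapter, where every non-comment row carries five fields plus an optional sixth (Additional Parameters):

```shell
# Sanity-check an mcf file: every non-comment, non-blank row should have
# 5 or 6 whitespace-separated fields (Additional Parameters is optional).
# Illustrative pre-check only; sam-fsd performs the real validation.
mcf_check() {
  awk '
    /^[[:space:]]*#/ { next }          # skip comment rows
    NF == 0          { next }          # skip blank lines
    NF < 5 || NF > 6 {
      printf "line %d: %d fields (expected 5 or 6)\n", NR, NF
      bad = 1
    }
    END { exit bad }
  ' "$1"
}

# Example run against a copy of the mcf shown above:
cat > /tmp/mcf.demo <<'EOF'
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#----------------- --------- --------- --------- ------ ----------------
qfsms              100       ms        qfsms     on
 /dev/dsk/c1t3d0s3 101       md        qfsms     on
 /dev/dsk/c1t4d0s5 102       md        qfsms     on
EOF
mcf_check /tmp/mcf.demo && echo "mcf field counts look OK"
```

A nonzero exit status flags the offending line number, so the check can sit in front of sam-fsd in a configuration script.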
Create a mount-point directory for the new file system, and set the access permissions for the mount point.
Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /qfsms mount-point directory and set permissions to 755 (-rwxr-xr-x):
root@solaris:~# mkdir /qfsms
root@solaris:~# chmod 755 /qfsms
Tell the SAM-QFS software to reread the mcf file and reconfigure itself accordingly. Use the command samd config.
root@solaris:~# samd config
If the command samd config fails with the message You need to run /opt/SUNWsamfs/util/SAM-QFS-post-install, you forgot to run the post-installation script when you installed the software. Run it now:
root@solaris:~# /opt/SUNWsamfs/util/SAM-QFS-post-install
- The administrator commands will be executable by root only (group bin).
  If this is the desired value, enter "y". If you want to change the
  specified value enter "c".
...
root@solaris:~# 
Create the file system using the /opt/SUNWsamfs/sbin/sammkfs command and the family set name of the file system.
The SAM-QFS software uses dual allocation and default disk allocation unit (DAU) sizes for md devices. This is a good choice for a general-purpose file system, because it can accommodate both large and small files and I/O requests. In the example, we accept the defaults:
root@solaris:~# sammkfs qfsms
Building 'qfsms' will destroy the contents of devices:
        /dev/dsk/c1t3d0s3
        /dev/dsk/c1t4d0s5
Do you wish to continue? [y/N] yes
total data kilobytes = ...
If we were using mr devices, or if we needed to specify a non-default DAU size that better met our I/O requirements, we could do so by using the sammkfs command with the -a option:
root@solaris:~# sammkfs -a 16 qfs2ma
For additional information, see the sammkfs man page.
Back up the operating system's /etc/vfstab file.
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Add the new file system to the operating system's virtual file system configuration. Open the /etc/vfstab file in a text editor, and start a line for the qfsms family set device:
# File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
qfsms - /qfsms samfs -
In the sixth column of the /etc/vfstab file, Mount at Boot, enter no in most cases.
root@solaris:~# vi /etc/vfstab
# File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
...
qfsms - /qfsms samfs - no
To specify round-robin allocation, add the stripe=0 mount option:
# File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
qfsms - /qfsms samfs - no stripe=0
To specify striped allocation, add the stripe=stripe-width mount option, where stripe-width is the number of disk allocation units (DAUs) that should be written to each disk in the stripe. In our example, we set the stripe width to one DAU:
# File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
qfsms - /qfsms samfs - no stripe=1
Here, the stripe=1 option specifies a stripe width of one DAU and a write size of two DAUs. So, when the file system writes two DAUs at a time, it writes one to each of the two md disk devices in the qfsms family set.
Make any other desired changes to the /etc/vfstab file.
For example, to mount the file system in the background if the metadata server is not responding, you would add the bg mount option to the Mount Options field:
# File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
qfsms - /qfsms samfs - no stripe=1,bg
Save the vfstab file, and close the editor.
...
qfsms - /qfsms samfs - no stripe=1
:wq
root@solaris:~#
Mount the new file system:
root@solaris:~# mount /qfsms
The file system is now complete and ready to use.
Where to go from here:
If you are using Storage Archive Manager to set up an archiving file system, see "Configuring SAM-QFS Archiving File Systems".
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
ma File System

Once the SAM-QFS software is installed on the file-system host, you configure an ma file system as described below.
Log in to the file-system host as root. Log in to the global zone if the host is configured with zones.
root@solaris:~#
Select the disk devices that will hold the metadata.
Select the disk devices that will hold the data.
Create the mcf file.
You can create the mcf file by selecting options in the SAM-QFS File System Manager graphical user interface or by using a text editor. In the example below, we use the vi text editor:
root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
~
"/etc/opt/SUNWsamfs/mcf" [New File]
For the sake of clarity, enter column headings as comments.
Comment rows start with a hash sign (#):
# Equipment          Equipment Equipment Family Device Additional
# Identifier         Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ -----------------
Create an entry for the file-system family set.
In this example, we identify the file system as qfsma, increment the equipment ordinal to 200, set the equipment type to ma, set the family set name to qfsma, and set the device state to on:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- ------ ------ -----------------
qfsma              200       ma        qfsma  on
Add an entry for each metadata device. Enter the identifier for the disk device you selected in the Equipment Identifier column, set the equipment ordinal, and set the equipment type to mm.
Add enough metadata devices to hold the metadata required for the size of the file system. In the example, we add a single metadata device:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- ------ ------ -----------------
qfsma              200       ma        qfsma  on
 /dev/dsk/c0t0d0s0 201       mm        qfsma  on
Now add entries for the data devices, save the file, and quit the editor.
These can be md, mr, or striped-group (gXXX) devices. For this example, we specify md devices:
# Equipment          Equipment Equipment Family Device Additional
# Identifier         Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ -----------------
qfsma              200       ma        qfsma  on
 /dev/dsk/c0t0d0s0 201       mm        qfsma  on
 /dev/dsk/c0t3d0s0 202       md        qfsma  on
 /dev/dsk/c0t3d0s1 203       md        qfsma  on
:wq
root@solaris:~# 
Check the mcf file for errors by running the sam-fsd command.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
root@solaris:~# sam-fsd
If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step.
In the example below, sam-fsd reports an unspecified problem with a device:
root@solaris:~# sam-fsd
Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem qfsma
sam-fsd: Problem with file system devices.
Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed an exclamation point (!) instead of a 1 in the slice number part of the equipment name for device 202, the first md device:
qfsma              200       ma        qfsma  on
 /dev/dsk/c0t0d0s0 201       mm        qfsma  on
 /dev/dsk/c0t0d0s! 202       md        qfsma  on
 /dev/dsk/c0t3d0s0 203       md        qfsma  on
If the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step. The example below is a partial listing of error-free output:
root@solaris:~# sam-fsd
Trace file controls:
sam-amld       /var/opt/SUNWsamfs/trace/sam-amld
               cust err fatal ipc misc proc date
               size    10M  age 0
sam-archiverd  /var/opt/SUNWsamfs/trace/sam-archiverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
sam-catserverd /var/opt/SUNWsamfs/trace/sam-catserverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
Create the file system using the /opt/SUNWsamfs/sbin/sammkfs command and the family set name of the file system.
In the example, we create the file system using the default disk allocation unit (DAU) size for ma file systems with md devices, 64 kilobytes:
root@solaris:~# sammkfs qfsma
Building 'qfsma' will destroy the contents of devices:
        /dev/dsk/c0t0d0s0
        /dev/dsk/c0t3d0s0
        /dev/dsk/c0t3d0s1
Do you wish to continue? [y/N] yes
total data kilobytes = ...
The default is a good, general-purpose choice. But if the file system were primarily to support smaller files or applications that read and write smaller amounts of data, we could also specify a DAU size of 16 or 32 kilobytes. To specify a 16-kilobyte DAU, we would use the sammkfs command with the -a option:
root@solaris:~# sammkfs -a 16 qfsma
The DAU for mr devices and gXXX striped groups is fully adjustable within the range 8-65528 kilobytes, in increments of 8 kilobytes. The default is 64 kilobytes for mr devices and 256 kilobytes for gXXX striped groups. See the sammkfs man page for additional details.
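The constraints just described (8-65528 kilobytes, in 8-kilobyte increments) can be expressed as a small check. The function below is an illustrative sketch only, not a SAM-QFS utility; sammkfs itself enforces the real rules:

```shell
# Check whether a proposed DAU size (in kilobytes) is valid for
# mr and gXXX devices: 8-65528 KB, in increments of 8 KB.
# Illustrative only; sammkfs enforces the actual constraints.
valid_dau() {
  dau=$1
  [ "$dau" -ge 8 ] && [ "$dau" -le 65528 ] && [ $((dau % 8)) -eq 0 ]
}

for size in 16 64 256 65528 12 65536; do
  if valid_dau "$size"; then
    echo "dau ${size}K: ok"
  else
    echo "dau ${size}K: invalid"
  fi
done
```

Here 12 fails because it is not a multiple of 8, and 65536 fails because it exceeds the upper bound.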
Back up the operating system's /etc/vfstab file.
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Add the new file system to the operating system's virtual file system configuration. Open the /etc/vfstab file in a text editor, and start a line for the qfsma family set.
root@solaris:~# vi /etc/vfstab
# File
#Device   Device  Mount    System fsck Mount   Mount
#to Mount to fsck Point    Type   Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices  -       /devices devfs  -    no      -
...
qfsma     -       /qfsma   samfs  -
In the sixth column of the /etc/vfstab file, Mount at Boot, enter no.
root@solaris:~# vi /etc/vfstab
# File
#Device   Device  Mount    System fsck Mount   Mount
#to Mount to fsck Point    Type   Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices  -       /devices devfs  -    no      -
...
qfsma     -       /qfsma   samfs  -    no
To specify round-robin allocation, add the stripe=0 mount option:
# File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
...
qfsma - /qfsma samfs - no stripe=0
To specify striped allocation, add the stripe=stripe-width mount option, where stripe-width is an integer in the range [1-255] that represents the number of disk allocation units (DAUs) that should be written to each disk in the stripe.
When striped allocation is specified, data is written to devices in parallel. So, for best performance, choose a stripe width that fully utilizes the bandwidth available with your storage hardware. Note that the volume of data transferred for a given stripe width depends on how the hardware is configured. For md devices implemented on single disk volumes, a stripe width of 1 writes one 64-kilobyte DAU to each of two disks, for a total of 128 kilobytes. For md devices implemented on 3+1 RAID 5 volume groups, the same stripe width transfers one 64-kilobyte DAU to each of the three data disks on each of two devices, for a total of six DAUs or 384 kilobytes per transfer. In our example, we set the stripe width to one DAU:
# File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
...
qfsma - /qfsma samfs - no stripe=1
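The transfer-size arithmetic described above can be checked with a quick calculation. The helper below is purely illustrative (not a SAM-QFS command); it assumes the 64-kilobyte DAU and the device layouts discussed in the text:

```shell
# Kilobytes moved per write for a given stripe width:
#   stripe-width DAUs per device, times data disks behind each device,
#   times the number of data devices in the family set.
# Illustrative arithmetic only, assuming a 64 KB DAU as in the text.
transfer_kb() {
  stripe=$1 dau_kb=$2 devices=$3 disks_per_device=$4
  echo $(( stripe * dau_kb * devices * disks_per_device ))
}

# Two md devices on single disks, stripe=1: 2 DAUs = 128 KB per write.
transfer_kb 1 64 2 1     # prints 128

# Two md devices on 3+1 RAID-5 groups, stripe=1: 6 DAUs = 384 KB.
transfer_kb 1 64 2 3     # prints 384
```

Doubling the stripe width doubles the result, which is the basis for the stripe=2 experiment later in this section.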
You can try adjusting the stripe width to make better use of the available hardware. In the Mount Options field for the file system, set the stripe=n mount option, where n is the number of DAUs written to each device in the stripe. Test the I/O performance of the file system and readjust the setting as needed.
When you set stripe=0, SAM-QFS writes files to devices using round-robin allocation. Each file is completely allocated on one device until that device is full. Round-robin is preferred for shared file systems and multistream environments.
In the example, we have determined that the bandwidth of our RAID-5 volume groups is under-utilized with a stripe width of one, so we try stripe=2:
# File
#Device   Device  Mount    System fsck Mount   Mount
#to Mount to fsck Point    Type   Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices  -       /devices devfs  -    no      -
/proc     -       /proc    proc   -    no      -
...
qfsma     -       /qfsma   samfs  -    no      ...,stripe=2
When you have finished, save the vfstab file, and close the editor.
...
qfsma - /qfsma samfs - no stripe=1
:wq
root@solaris:~#
Mount the new file system:
root@solaris:~# mount /qfsma
The basic file system is now complete and ready to use.
If you are using Storage Archive Manager to set up an archiving file system, see "Configuring SAM-QFS Archiving File Systems".
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Archiving file systems combine one or more QFS ma- or ms-type file systems with archival storage and Oracle StorageTek Storage Archive Manager (SAM) software. The SAM software integrates secondary disk storage and/or removable media into the basic file-system operations, so that files are maintained in multiple copies on varied media. This redundancy provides continuous data protection and supports policy-driven retention and efficient storage of extremely large files.
SAM-QFS archiving file systems can copy files from the primary file-system disk cache to either tape volumes or disk-based file systems that have been configured as disk archives. In the latter case, SAM-QFS uses each file system more or less as it would a tape cartridge and addresses it using an assigned volume serial number (VSN). Disk-archive volumes can be significantly more responsive when small files are frequently archived, re-accessed, and/or modified, because random-access disk devices do not incur the mounting and positioning overhead associated with sequential-access tape devices.
Determine the number of file systems that you are likely to need. For best performance, one SAM-QFS operation should read or write to one disk volume at a time, as with tape volumes. So the number of required volumes depends on the workload that you identified when gathering and defining requirements.
In typical deployments, a number between 15 and 30 volumes is usually about right.
Identify the disk resources and total capacity that can be made available for disk archiving.
Calculate the number of disk volumes that you can actually create from the available resources. Allow 10 to 20 terabytes per volume. If the total available capacity is less than 10 terabytes, you can create a single archive volume.
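The sizing arithmetic above can be sketched as a one-line calculation: divide the usable capacity by the planned per-volume size (10 to 20 terabytes), with a minimum of one volume. The capacity figures below are hypothetical, chosen only to illustrate the rule:

```shell
# Number of disk-archive volumes for a given usable capacity.
# Hypothetical figures; allow roughly 10-20 TB per volume, minimum one.
volume_count() {
  capacity_tb=$1 per_volume_tb=$2
  count=$(( capacity_tb / per_volume_tb ))
  [ "$count" -lt 1 ] && count=1
  echo "$count"
}

volume_count 300 20   # 300 TB at 20 TB per volume -> 15 volumes
volume_count 8 10     # under 10 TB total -> a single archive volume
```

A result in the 15-30 range lines up with the rule of thumb given above for typical deployments.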
Configure a file system for each archive volume.
You can use any combination of local or NFS-mounted, QFS, ZFS, and/or UFS file systems as archive volumes (NFS-mounted volumes can be particularly useful for creating off-site archive copies).
Do not try to use subdirectories of a single file system as archival volumes. If multiple volumes are defined on a single set of physical devices, multiple SAM-QFS operations will contend for the same resources. This situation can drastically increase disk overhead and severely reduce performance.
For the examples in this section, we create fifteen file systems:
DISKVOL1 is a local QFS file system that we create specifically for use as archival storage.
DISKVOL2 to DISKVOL15 are UFS file systems mounted on a remote server named server.
If you configure one or more QFS file systems as archival storage volumes, assign each a family set name and a range of equipment ordinal numbers that clearly identifies it as an archival storage volume.
Clearly distinguishing the QFS archival storage file system from other SAM-QFS primary file systems makes the configuration easier to understand and maintain. In this example, the name of the new file system, DISKVOL1, indicates its function. In the mcf file, this name and the equipment ordinal 800 will distinguish the disk archive from samms and 100, the family set name and ordinal number that we will use when we create an archiving SAM-QFS file system in subsequent examples:
# Archiving file systems:
#
# Equipment               Equipment Equipment Family    Device Additional
# Identifier              Ordinal   Type      Set       State  Parameters
#----------------------- --------- --------- --------- ------ -----------------
#
# Archival storage for copies:
#
# Equipment               Equipment Equipment Family    Device Additional
# Identifier              Ordinal   Type      Set       State  Parameters
#----------------------- --------- --------- --------- ------ -----------------
DISKVOL1                 800       ms        DISKVOL1  on
 /dev/dsk/c6t0d1s7       801       md        DISKVOL1  on
 /dev/dsk/c4t0d2s7       802       md        DISKVOL1  on
On the SAM-QFS host, create a single parent directory to hold the mount points for the archival disk volumes, much as a physical tape library holds archival tape volumes.
In the example, we create the directory /diskvols:
root@solaris:~# mkdir /diskvols
In the parent directory, create a mount-point directory for each archival file system.
In the example, we create the mount-point directories DISKVOL1 and DISKVOL2 to DISKVOL15:
root@solaris:~# mkdir /diskvols/DISKVOL1
root@solaris:~# mkdir /diskvols/DISKVOL2
...
root@solaris:~# mkdir /diskvols/DISKVOL15
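The repetitive mkdir commands above can be scripted. The loop below is a convenience sketch only; it writes under a temporary base directory so it can run anywhere, whereas on a live SAM-QFS host you would use /diskvols as the base:

```shell
# Create mount-point directories DISKVOL1..DISKVOL15 in a loop.
# Demonstrated under a temporary base directory; on the SAM-QFS host
# the base would be /diskvols.
base=$(mktemp -d)
i=1
while [ "$i" -le 15 ]; do
  mkdir -p "$base/DISKVOL$i"
  i=$((i + 1))
done
ls "$base"    # lists the fifteen DISKVOL directories
```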
On the SAM-QFS host, back up the /etc/vfstab file. Then open it in an editor, add entries for each archival file system, and add the mount option nosam to each QFS file system. Save the file, and close the editor.
The nosam mount option makes sure that archival copies stored on a QFS file system are not themselves archived.
In the example, we use the vi editor to add entries for DISKVOL1 and DISKVOL2 to DISKVOL15:
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
root@solaris:~# vi /etc/vfstab
# File
#Device            Device  Mount               System fsck Mount   Mount
#to Mount          to fsck Point               Type   Pass at Boot Options
#--------          ------- ------------------  ------ ---- ------- ---------
/devices           -       /devices            devfs  -    no      -
...
DISKVOL1           -       /diskvols/DISKVOL1  samfs  -    yes     nosam
server:/DISKVOL2   -       /diskvols/DISKVOL2  nfs    -    yes
server:/DISKVOL3   -       /diskvols/DISKVOL3  nfs    -    yes
...
server:/DISKVOL15  -       /diskvols/DISKVOL15 nfs    -    yes
:wq
root@solaris:~# 
On the SAM-QFS host, mount the archival file system(s).
In the example, we mount DISKVOL1 and DISKVOL2 to DISKVOL15:
root@solaris:~# mount /diskvols/DISKVOL1
root@solaris:~# mount /diskvols/DISKVOL2
...
root@solaris:~# mount /diskvols/DISKVOL15
This section addresses the following tasks:
If you have an Oracle StorageTek ACSLS network-attached library, you can configure it as follows or you can use the SAM-QFS Manager graphical user interface to automatically discover and configure the library (for instructions on using SAM-QFS Manager, see the online help).
Proceed as follows:
Log in to the SAM-QFS server host as root.
root@solaris:~#
Change to the /etc/opt/SUNWsamfs directory.
root@solaris:~# cd /etc/opt/SUNWsamfs
In a text editor, start a new file with a name that corresponds to the type of network-attached library that you are configuring.
In the example, we start a parameters file for an Oracle StorageTek ACSLS network-attached library:
root@solaris:~# vi /etc/opt/SUNWsamfs/acsls1params
# Configuration File for an ACSLS Network-Attached Tape Library 1
Enter the parameters and values that the SAM-QFS software will use when communicating with the ACSLS-attached library.
The SAM-QFS software uses the following Oracle StorageTek Automated Cartridge System Application Programming Interface (ACSAPI) parameters to control ACSLS-managed libraries (for more information, see the stk man page):
access=user-id specifies an optional user identification value for access control. By default, there is no user identification-based access control.
hostname=hostname specifies the hostname of the server that runs the StorageTek ACSLS interface.
portnum=portnum specifies the port number that is used for communication between ACSLS and the SAM-QFS software.
ssihost=hostname specifies the hostname that identifies a multihomed SAM-QFS server to the network that connects to the ACSLS host. The default is the name of the local host.
ssi_inet_port=ssi-inet-port specifies the fixed firewall port that the ACSLS Server System Interface must use for incoming ACSLS responses. Specify either 0 or a value in the range [1024-65535]. The default, 0, allows dynamic port allocation.
csi_hostport=csi-port specifies the Client System Interface port number on the ACSLS server to which SAM-QFS sends its ACSLS requests. Specify either 0 or a value in the range [1024-65535]. The default, 0, causes the system to query the port mapper on the ACSLS server for a port.
capid=(acs=acsnum,lsm=lsmnum,cap=capnum) specifies the ACSLS address of a cartridge access port (CAP), where acsnum is the Automated Cartridge System (ACS) number for the library, lsmnum is the Library Storage Module (LSM) number for the module that holds the CAP, and capnum is the identifying number for the desired CAP. The complete address is enclosed in parentheses.
capacity=(index-value-list) specifies the capacities of removable media cartridges, where index-value-list is a comma-delimited list of index=value pairs. Each index in the list is the index of an ACSLS-defined media type, and each value is the corresponding volume capacity in units of 1024 bytes.
The ACSLS file /export/home/ACSSS/data/internal/mixed_media/media_types.dat defines the media-type indices. In general, you only need to supply a capacity entry for new cartridge types or when you need to override the supported capacity.
device-path-name=(acs=acsnum,lsm=lsmnum,panel=panelnum,drive=drivenum) [shared]
specifies the ACSLS address of a drive that is attached to the client, where device-path-name identifies the device on the SAM-QFS server, acsnum is the Automated Cartridge System (ACS) number for the library, lsmnum is the Library Storage Module (LSM) number for the module that controls the drive, panelnum is the identifying number for the panel where the drive is installed, and drivenum is the identifying number of the drive. The complete address is enclosed in parentheses.
Adding the optional shared keyword after the ACSLS address lets two or more SAM-QFS servers share the drive as long as each retains exclusive control over its own media. By default, a cartridge in a shared drive can be idle for 60 seconds before being unloaded.
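A shared drive entry would then look like the following sketch; the device path and ACSLS address values are illustrative, not taken from the chapter's example configuration.

```
# Hypothetical entry: the drive at /dev/rmt/2cbn is shared with
# another SAM-QFS server (address values are illustrative):
/dev/rmt/2cbn = (acs=0, lsm=1, panel=0, drive=3) shared
```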
In the example, we identify acslserver1 as the ACSLS host, limit access to sam_user, specify dynamic port allocation, and map a cartridge access port and two drives:
root@solaris:~# vi /etc/opt/SUNWsamfs/acsls1params
# Configuration File for an ACSLS Network-Attached Tape Library 1
hostname = acslserver1
portnum = 50014
access = sam_user
ssi_inet_port = 0
csi_hostport = 0
capid = (acs=0, lsm=1, cap=0)
/dev/rmt/0cbn = (acs=0, lsm=1, panel=0, drive=1)
/dev/rmt/1cbn = (acs=0, lsm=1, panel=0, drive=2)
Save the file and close the editor.
root@solaris:~# vi /etc/opt/SUNWsamfs/acsls1params
# /etc/opt/SUNWsamfs/acslibrary1
# Configuration File for an ACSLS Network-Attached Tape Library
...
/dev/rmt/0cbn = (acs=0, lsm=1, panel=0, drive=1)
/dev/rmt/1cbn = (acs=0, lsm=1, panel=0, drive=2)
:wq
root@solaris:~#
If required, Configure Labeling Behavior for Barcoded Removable Media or Set Drive Timing Values.
Otherwise, go to "Configure the Archiving File System".
If you have a tape library that uses a barcode reader, you can configure SAM-QFS to base volume labels on the barcodes by using the labels directive in the defaults.conf file. Proceed as follows:
Log in to the SAM-QFS host as root.
root@solaris:~#
If you need the library to automatically label each volume using the first six characters of the barcode on the media and have not changed the defaults, stop here. If required, Set Drive Timing Values. Otherwise, go to "Configure the Archiving File System".
By default, if a library holds a barcode reader and barcoded media, SAM-QFS software automatically labels the volumes with the first six characters in the barcode.
If you require a non-default behavior or if you have previously overridden the default, open the file /etc/opt/SUNWsamfs/defaults.conf in a text editor.
In the example, we open the file in the vi editor:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
Locate the directive line labels =, if present, or add it if it is not present.
In the example, we add the directive:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults.
...
labels =
To re-enable the default, automatic labeling based on the first six characters of the barcode, set the value of the labels directive to barcodes. Save the file, and close the editor.
The SAM-QFS software now automatically relabels an unlabeled tape using the first six characters of the tape's barcode as the label:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
labels = barcodes
:wq
root@solaris:~#
To enable automatic labeling based on the last six characters of the barcode on a tape, set the value of the labels directive to barcodes_low. Save the file, and close the editor.
When the labels directive is set to barcodes_low, the SAM-QFS software automatically relabels an unlabeled tape using the last six characters of the tape's barcode as the label:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
labels = barcodes_low
:wq
root@solaris:~#
To disable automatic labeling and configure SAM-QFS to read labels from tapes, set the value of the labels directive to read. Save the file, and close the editor.
When the labels directive is set to the value read, the SAM-QFS software cannot automatically relabel tapes:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
labels = read
idle_unload = 0
...
:wq
root@solaris:~#
If required, Set Drive Timing Values.
Otherwise, go to "Configure the Archiving File System".
By default, the SAM-QFS software sets drive timing parameters as follows:
The minimum time that must elapse before a specified device type can dismount media is 60 seconds.
The amount of time that SAM-QFS software waits before issuing new commands to a library that is responding to a SCSI unload command is 15 seconds.
The amount of time that SAM-QFS software waits before unloading an idle drive is 600 seconds (10 minutes).
The amount of time that SAM-QFS software waits before unloading an idle drive that is shared by two or more SAM-QFS servers is 600 seconds (10 minutes).
To change the default timing values, proceed as follows:
If you are not logged in, log in to the SAM-QFS host as root.
root@solaris:~#
Open the /etc/opt/SUNWsamfs/defaults.conf file in a text editor.
In the example, we use the vi editor:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character and change the value.
...
If required, specify the minimum time that must elapse before a specified device type can dismount media. In the defaults.conf file, add a directive of the form equipment-type_delay = number-of-seconds, where equipment-type is the two-character SAM-QFS code that identifies the drive type that you are configuring and number-of-seconds is an integer representing the default number of seconds for this device type.
See Appendix A, "Glossary of Equipment Types" for listings of equipment type codes and corresponding equipment. In the example, we change the unload delay for LTO drives (equipment type li) from the default value (60 seconds) to 90 seconds:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character and change the value.
...
li_delay = 90
If required, specify the amount of time that SAM-QFS software waits before issuing new commands to a library that is responding to a SCSI unload command. In the defaults.conf file, add a directive of the form equipment-type_unload = number-of-seconds, where equipment-type is the two-character SAM-QFS code that identifies the drive type that you are configuring and number-of-seconds is an integer representing the number of seconds for this device type.
See Appendix A, "Glossary of Equipment Types" for listings of equipment type codes and corresponding equipment. Set the longest time that the library might need when responding to the unload command in the worst case. In the example, we change the unload delay for LTO drives (equipment type li) from the default value (15 seconds) to 35 seconds:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character and change the value.
...
li_delay = 90
li_unload = 35
If required, specify the amount of time that SAM-QFS software waits before unloading an idle drive. In the defaults.conf file, add a directive of the form idle_unload = number-of-seconds, where number-of-seconds is an integer representing the specified number of seconds.
Specify 0 to disable this feature. In the example, we disable this feature by changing the default value (600 seconds) to 0:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character and change the value.
...
li_delay = 90
li_unload = 35
idle_unload = 0
If required, specify the amount of time that SAM-QFS software waits before unloading a shared idle drive. In the defaults.conf file, add a directive of the form shared_unload = number-of-seconds, where number-of-seconds is an integer representing the specified number of seconds.
You can configure SAM-QFS servers to share removable-media drives. This directive frees drives for use by other servers when the server that owns the loaded media is not actually using the drive. Specify 0 to disable this feature. In the example, we disable this feature by changing the default value (600 seconds) to 0:
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character and change the value.
...
li_delay = 90
li_unload = 35
idle_unload = 0
shared_unload = 0
Save the file, and close the editor.
root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character and change the value.
...
li_delay = 90
li_unload = 35
idle_unload = 0
shared_unload = 0
:wq
root@solaris:~#
Next, Configure the Archiving File System.
The procedure for creating an archiving file system is identical to creating a non-archiving file system, except that we add devices for storing additional copies of the data files:
Start by configuring a QFS file system. You can Configure a General-Purpose ms File System or Configure a High-Performance ma File System.
While you can use the SAM-QFS File System Manager graphical user interface to create file systems, for the examples in this section, we use the vi editor. Here, we create a general-purpose ms file system with the family set name samms and the equipment ordinal number 100:
root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Archiving file systems:
#
# Equipment               Equipment Equipment Family Device Additional
# Identifier              Ordinal   Type      Set    State  Parameters
#----------------------- --------- --------- ------ ------ -------------------
samms                    100       ms        samms  on
 /dev/dsk/c1t3d0s3       101       md        samms  on
 /dev/dsk/c1t3d0s4       102       md        samms  on
To add archival tape storage, start by adding an entry for the library. In the equipment identifier field, enter the device ID for the library and assign an equipment ordinal number:
In this example, the library equipment identifier is /dev/scsi/changer/c1t0d5. We set the equipment ordinal number to 900, a number in the range following the range chosen for our disk archive:
# Archival storage for copies:
#
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------------- --------- --------- --------- ------ -----------------
DISKVOL1 800 ms DISKVOL1 on
/dev/dsk/c6t0d1s7 801 md DISKVOL1 on
/dev/dsk/c4t0d2s7 802 md DISKVOL1 on
/dev/scsi/changer/c1t0d5 900
Set the equipment type to rb, a generic SCSI-attached tape library, provide a name for the tape library family set, and set the device state to on.
In this example, we are using the library library1:
# Archival storage for copies:
#
# Equipment               Equipment Equipment Family    Device Additional
# Identifier              Ordinal   Type      Set       State  Parameters
#----------------------- --------- --------- --------- ------ -----------------
DISKVOL1                 800       ms        DISKVOL1  on
 /dev/dsk/c6t0d1s7       801       md        DISKVOL1  on
 /dev/dsk/c4t0d2s7       802       md        DISKVOL1  on
/dev/scsi/changer/c1t0d5 900       rb        library1  on
Optionally, in the Additional Parameters column, enter the path where the library catalog will be stored.
If you do not opt to supply a catalog path, the software will set a default path for you.
Note that, due to document layout limitations, the example abbreviates the long path to the library catalog, /var/opt/SUNWsamfs/catalog/library1cat:
# Archival storage for copies:
#
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------------- --------- --------- --------- ------ -----------------
DISKVOL1 800 ms DISKVOL1 on
/dev/dsk/c6t0d1s7 801 md DISKVOL1 on
/dev/dsk/c4t0d2s7 802 md DISKVOL1 on
/dev/scsi/changer/c1t0d5 900 rb library1 on ...catalog/library1cat
Next, add an entry for each tape drive that is part of the library family set. Add each drive in the order in which it is physically installed in the library.
Follow the drive order listed in the drive-mapping file that you created in "Determine the Order in Which Drives are Installed in the Library". In the example, the drives attached to Solaris at /dev/rmt/1, /dev/rmt/0, /dev/rmt/2, and /dev/rmt/3 are, respectively, drives 1, 2, 3, and 4 in the library. So /dev/rmt/1 is listed first in the mcf file, as device 901. The tp equipment type specifies a generic SCSI-attached tape drive:
# Archival storage for copies:
#
# Equipment               Equipment Equipment Family    Device Additional
# Identifier              Ordinal   Type      Set       State  Parameters
#----------------------- --------- --------- --------- ------ -----------------
DISKVOL1                 800       ms        DISKVOL1  on
 /dev/dsk/c6t0d1s7       801       md        DISKVOL1  on
 /dev/dsk/c4t0d2s7       802       md        DISKVOL1  on
/dev/scsi/changer/c1t0d5 900       rb        library1  on     ...catalog/library1cat
 /dev/rmt/1cbn           901       tp        library1  on
 /dev/rmt/0cbn           902       tp        library1  on
 /dev/rmt/2cbn           903       tp        library1  on
 /dev/rmt/3cbn           904       tp        library1  on
Finally, if you wish to configure a SAM-QFS historian yourself, add an entry using the equipment type hy. Enter a hyphen in the family-set and device-state columns and enter the path to the historian's catalog in the additional-parameters column.
The historian is a virtual library that catalogs volumes that have been exported from the archive. If you do not configure a historian, the software creates one automatically using the highest specified equipment ordinal number plus one.
Note that the example abbreviates the long path to the historian catalog for page-layout reasons. The full path is /var/opt/SUNWsamfs/catalog/historian_cat:
# Archival storage for copies:
#
# Equipment               Equipment Equipment Family    Device Additional
# Identifier              Ordinal   Type      Set       State  Parameters
#----------------------- --------- --------- --------- ------ -----------------
DISKVOL1                 800       ms        DISKVOL1  on
 /dev/dsk/c6t0d1s7       801       md        DISKVOL1  on
 /dev/dsk/c4t0d2s7       802       md        DISKVOL1  on
/dev/scsi/changer/c1t0d5 900       rb        library1  on     ...catalog/library1cat
 /dev/rmt/1cbn           901       tp        library1  on
 /dev/rmt/0cbn           902       tp        library1  on
 /dev/rmt/2cbn           903       tp        library1  on
 /dev/rmt/3cbn           904       tp        library1  on
historian                999       hy        -         -      ...catalog/historian_cat
Save the mcf file, and close the editor.
...
/dev/rmt/3cbn 904 tp library1 on
historian 999 hy - - ...catalog/historian_cat
:wq
root@solaris:~#
Check the mcf file for errors by running the sam-fsd command. Correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
root@solaris:~# sam-fsd
If you are using one or more file systems as archival storage volumes, create the /etc/opt/SUNWsamfs/diskvols.conf file in a text editor, and assign a volume serial number (VSN) to each file system. For each file system, start a new line consisting of the desired volume serial number, white space, and the path to the file-system mount point. Then save the file.
In the example, we have fifteen disk-based archival volumes: DISKVOL1 is the QFS file system that we created locally for this purpose. DISKVOL2 through DISKVOL15 are UFS file systems. All are mounted in the /diskvols/ directory:
root@solaris:~# vi /etc/opt/SUNWsamfs/diskvols.conf
# Volume
# Serial    Resource
# Number    Path
# ------    ---------------------
DISKVOL1    /diskvols/DISKVOL1
DISKVOL2    /diskvols/DISKVOL2
...
DISKVOL15   /diskvols/DISKVOL15
Create a mount-point directory for the new file system, and set the access permissions for the mount point.
Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /samms mount-point directory and set permissions to 755 (-rwxr-xr-x):
root@solaris:~# mkdir /samms
root@solaris:~# chmod 755 /samms
Tell the SAM-QFS software to reread the mcf file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary:
root@solaris:~# /opt/SUNWsamfs/sbin/samd config
Log in to the file-system host as root. Log in to the global zone if the host is configured with zones.
Back up the Solaris /etc/vfstab file, and open it in a text editor.
In the example, we use the vi editor:
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
root@solaris:~# vi /etc/vfstab
#File
#Device   Device  Mount    System fsck Mount   Mount
#to Mount to fsck Point    Type   Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices  -       /devices devfs  -    no      -
...
samms     -       /samms   samfs  -    yes     -
Set the high-water mark, the percentage of disk cache utilization that causes SAM-QFS to release previously archived files from disk. In the last column of the SAM-QFS file-system entry, enter the mount option high=percentage, where percentage is a number in the range [0-100].
Set this value based on disk storage capacity, average file size, and an estimate of the number of files that are accessed at any given time. You want to make sure that there is always enough cache space for both new files that users create and archived files that users need to access. But you also want to do as little staging as possible, so that you can avoid the overhead associated with mounting removable media volumes.
If the primary cache is implemented using the latest high-speed disk or solid-state devices, set the high-water mark value at 95%. Otherwise use 80-85%. In the example, we set the high-water mark to 85%:
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
root@solaris:~# vi /etc/vfstab
#File
#Device   Device  Mount    System fsck Mount   Mount
#to Mount to fsck Point    Type   Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices  -       /devices devfs  -    no      -
...
samms     -       /samms   samfs  -    yes     high=85
Set the low-water mark, the percentage of disk cache utilization that causes SAM-QFS to stop releasing previously archived files from disk. In the last column of the SAM-QFS file-system entry, enter the mount option low=percentage, where percentage is a number in the range [0-100].
Set this value based on disk storage capacity, average file size, and an estimate of the number of files that are accessed at any given time. For performance reasons, you want to keep as many recently active files in cache as you can, particularly when files are frequently requested and modified. This keeps staging-related overhead to a minimum. But you do not want previously cached files to consume space needed for new files and newly accessed files that have to be staged to disk from archival copies.
If the primary cache is implemented using the latest high-speed disk or solid-state devices, set the low-water mark value at 90%. Otherwise, use 70-75%. In the example, based on local requirements, we set the low-water mark to 75%:
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
root@solaris:~# vi /etc/vfstab
#File
#Device   Device  Mount    System fsck Mount   Mount
#to Mount to fsck Point    Type   Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices  -       /devices devfs  -    no      -
...
samms     -       /samms   samfs  -    yes     high=85,low=75
If your users need to retain some file data in the disk cache when previously archived files are released from disk, enter partial releasing mount options in the last column of the SAM-QFS file-system entry.
Partial releasing lets SAM-QFS leave the first part of a designated file in the disk cache when it releases archived files to recover disk space. This approach gives applications immediate access to the data at the start of the file while the remainder stages from archival media, such as tape. The following mount options govern partial releasing:
maxpartial=value
sets the maximum amount of file data that can remain in disk cache when a file is partially released, where value is a number of kilobytes in the range [0-2097152] (0 disables partial releasing). The default is 16.
partial=value
sets the default amount of file data that remains in disk cache after a file is partially released, where value is a number of kilobytes in the range [0-maxpartial]. The default is 16. But note that the retained portion of a file always occupies at least one disk allocation unit (DAU) of space.
partial_stage=value
sets the minimum amount of file data that must be read before an entire partially released file is staged, where value is a number of kilobytes in the range [0-maxpartial]. The default is the value specified by -o partial, if set, or 16.
stage_n_window=value
sets the maximum amount of data that can be read at any one time from a file that is read directly from tape media, without automatic staging, where value is a number of kilobytes in the range [64-2048000]. The default is 256.
For more information on files that are read directly from tape media, see the OPTIONS section of the stage man page under -n.
In the example, we set maxpartial to 128 and partial to 64, based on the characteristics of our application, and otherwise accept default values:
root@solaris:~# vi /etc/vfstab
#File
#Device   Device  Mount    System fsck Mount   Mount
#to Mount to fsck Point    Type   Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices  -       /devices devfs  -    no      -
...
samms     -       /samms   samfs  -    yes     ...,maxpartial=128,partial=64
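For illustration only, a vfstab entry that sets all four partial-release options together might look like the sketch below; the values are hypothetical, not recommendations, and must stay within the ranges described above (partial and partial_stage no greater than maxpartial).

```
#-------- ------- -------- ------ ---- ------- -------------------------
samms     -       /samms   samfs  -    yes     high=85,low=75,maxpartial=2048,partial=512,partial_stage=256,stage_n_window=512
```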
If you need to exclude QFS file systems from archiving, add the nosam mount option to the /etc/vfstab entry for each.
In the example, the nosam option is set for the DISKVOL1 file system, which is a disk archive. Here, the nosam mount option makes sure that archival copies are not themselves archived:
#File
#Device            Device  Mount               System fsck Mount   Mount
#to Mount          to fsck Point               Type   Pass at Boot Options
#----------------- ------- ------------------- ------ ---- ------- ---------
/devices           -       /devices            devfs  -    no      -
...
samms              -       /samms              samfs  -    yes     ...,partial=64
DISKVOL1           -       /diskvols/DISKVOL1  samfs  -    yes     nosam
server:/DISKVOL2   -       /diskvols/DISKVOL2  nfs    -    yes
...
server:/DISKVOL15  -       /diskvols/DISKVOL15 nfs    -    yes
Save the /etc/vfstab file, and close the editor.
...
server:/DISKVOL15 - /diskvols/DISKVOL15 nfs - yes
:wq
root@solaris:~#
Mount the SAM-QFS archiving file system:
root@solaris:~# mount /samms
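To confirm that the mount succeeded, you can check with standard Solaris commands; the output depends on your configuration, so none is shown here.

```
root@solaris:~# df -k /samms
root@solaris:~# /usr/sbin/mount -p | grep samms
```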
Once archiving file systems have been created and mounted, you can generally address all or most of your archiving requirements with little additional configuration. In most cases, you need do little more than create a text file, archiver.cmd, that identifies the file systems, specifies the number of archive copies of each of your files, and assigns media volumes to each copy.
While the SAM-QFS archiving process does have a number of tunable parameters, you should generally accept the default settings in the absence of well-defined, special requirements. The defaults have been carefully chosen to minimize the number of media mounts, maximize utilization of media, and optimize end-to-end archiving performance in the widest possible range of circumstances. So if you do need to make adjustments, be particularly careful about any changes that unnecessarily restrict the archiver's freedom to schedule work and select media. If you try to micromanage storage operations, you can reduce performance and overall efficiency, sometimes drastically.
You should, however, enable archive logging in almost all situations. Archive logging is not enabled by default, because the log files can reach excessive sizes if not properly managed (management is covered in the StorageTek Storage Archive Manager and QFS Software Maintenance and Administration Guide). But, if a file system is ever damaged or lost, the archive log file lets you recover files that cannot otherwise be easily restored. When you Configure File System Protection and maintain the recovery point files properly, the file-system metadata in a recovery point file lets you rapidly rebuild a file system from the data stored in archive copies. But a few files are inevitably archived after the last recovery point is generated but before the file system is damaged or lost. In this situation, the archival media holds valid copies, but, in the absence of file-system metadata, the copies cannot be automatically located. Since the file system's archive log records the volume serial numbers of the media that holds each archive copy and the position of the corresponding tar file(s) within each volume, you can use tar utilities to recover these files and fully restore the file system.
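As a rough sketch only: assuming you have already located the volume and the tar-file position in the archiver log, and loaded the volume into a drive, such a recovery might proceed along the following lines. The device name, file count, and file path here are invented for the illustration, and the exact positioning method depends on your drive and on the position format recorded in your log; consult the Maintenance and Administration Guide before attempting a real recovery.

```
# Skip forward to the tar file recorded in the archiver log (count assumed),
# then extract the lost file with the star utility supplied with SAM-QFS:
root@solaris:~# mt -f /dev/rmt/0cbn fsf 34
root@solaris:~# /opt/SUNWsamfs/sbin/star -xv -f /dev/rmt/0cbn path/of/lost/file
```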
To create the archiver.cmd file and configure the archiving process, proceed as follows:
Log in to the host as root.
root@solaris:~#
Open a new /etc/opt/SUNWsamfs/archiver.cmd file in a text editor.
Each line in an archiver.cmd file consists of one or more fields separated by white space (leading white space is ignored).
In the example, we use the vi editor to open the file and enter a comment:
root@solaris:~# vi /etc/opt/SUNWsamfs/archiver.cmd
# Configuration file for SAM-QFS archiving file systems
At the beginning of the archiver.cmd file, enter any general archiving directives that you need.
General directives contain the equals (=) character in the second field or have no additional fields. In most cases, you can use the default values instead of setting general directives (see the GENERAL DIRECTIVES SECTION of the archiver.cmd man page for details).
While we could leave this section empty, in the example, we have entered the default values for two general directives to illustrate their form:
The archivemeta = off directive tells the archiving process that it should not archive metadata.
The examine = noscan directive tells the archiving process to check for files that need archiving whenever the file system reports that files have changed (the default).
Older versions of SAM-QFS scanned the whole file system periodically. In general, you should not change this directive unless you must do so for compatibility with legacy SAM-QFS configurations.
# Configuration file for SAM-QFS archiving file systems
#-----------------------------------------------------------------------
# General Directives
archivemeta = off # default
examine = noscan  # default
Once you have entered all required general archiving directives, start assigning files to archive sets. On a new line, enter the assignment directive fs = filesystem-name, where filesystem-name is the family set name for a file system defined in the /etc/opt/SUNWsamfs/mcf file.
The assignment directive maps a set of files in the specified file system to a set of copies on archival media. A set of files can be as large as all file systems or as small as a few files. But, for best performance and efficiency, you should not over-specify. Do not create more archive sets than you need to, as this can cause excessive media mounts, needless repositioning of media, and poor overall media utilization. In most cases, assign one archive set per file system.
In the example, we start the archive-set assignment directive for the archiving file system samms:
# Configuration file for SAM-QFS archiving file systems
#-----------------------------------------------------------------------
# General Directives
archivemeta = off # default
examine = noscan # default
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = samms
# SAM-QFS Archiving File System
On the next line, enable archive logging. Enter the logfile = path/filename directive, where path/filename specifies the location and file name.
As noted above, archive log data are essential for a complete recovery following loss of a file system. So configure SAM-QFS to write the archiver log to a non-SAM-QFS directory, such as /var/adm/, and save copies regularly. While you can create a global archiver.log that records archiver activity for all file systems together, configuring a log for each file system makes it easier to search the log during file recovery. So, in the example, we specify /var/adm/samms.archive.log here, with the file-system assignment directives:
root@solaris:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = samms # SAM-QFS Archiving File System
logfile = /var/adm/samms.archive.log
On the next line, assign files from this file system to archive sets. For each archive set that you need to create, enter the directive archiveset-name starting-directory expression, where:
archiveset-name is the name that you choose for the new archive set.
starting-directory is the path to the directory where SAM-QFS starts to search for files (relative to the file-system mount point).
expression is one of the Boolean expressions defined by the Solaris find command.
You should keep archive set definitions as inclusive and simple as possible in most cases. But note that, when circumstances dictate, you can limit archive set membership by specifying additional, more restrictive qualifiers, such as user or group file ownership, file size, file date/time stamps, and file names (using regular expressions). See the archiver.cmd man page for full information.
In the example, we put all files found in the samms file system in a single archive set named all. We specify the path using a dot (.) to start the search in the mount point directory itself (/samms).
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = samms # SAM-QFS Archiving File System
logfile = /var/adm/samms.archive.log
all .
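If you did need narrower membership, additional assignment lines could qualify sets by attributes such as owner or size. The set names, directory, user, and size below are hypothetical, purely for illustration, and are not part of the configuration built in this chapter; see the archiver.cmd man page for the search criteria your release supports.

```
# Hypothetical, more restrictive assignments (not used in this chapter):
hq_docs   share/hq  -user hqadmin
big_media .         -minsize 250M
```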
Next, add copy directives for the all archive set of the samms file system. For each copy, start the line with one or more spaces, and enter the directive copy-number -release -norelease archive-age unarchive-age, where:
copy-number is an integer.
-release and -norelease are optional parameters that control how disk cache space is managed once copies have been made. On its own, -release causes the disk space to be automatically released as soon as the corresponding copy is made. On its own, -norelease prevents release of disk space until all copies that have -norelease set have been made and the releaser process has been run. Together, -release and -norelease automatically release disk cache space as soon as all copies that have -norelease set have been made.
archive-age is the time that must elapse from the time when the file was last modified before it is archived. Express time as any combination of integers and the identifiers s (seconds), m (minutes), h (hours), d (days), w (weeks), and y (years). The default is 4m.
unarchive-age is the time that must elapse from the time when the file was last modified before it can be unarchived. The default is to never unarchive copies.
For full redundancy, always specify at least two copies of each archive set (the maximum is four). In the example, we specify three copies, each with -norelease until the copy reaches an archive age of 15 minutes. Copy 1 will be made using the archival disk volumes, while copies 2 and 3 will be made to tape media:
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = samms # SAM-QFS Archiving File System
logfile = /var/adm/samms.archive.log
all .
    1 -norelease 15m
    2 -norelease 15m
    3 -norelease 15m
Define archive sets for any remaining file systems.
In the example, we have configured a QFS file system, DISKVOL1, as archival media for the copy process. So we start an entry for fs = DISKVOL1. But we do not want to be making archival copies of archival copies. So we do not specify a log file, and we use a special archive set called no_archive that prevents archiving for the files in this file system:
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = samms # SAM-QFS Archiving File System
logfile = /var/adm/samms.archive.log
all .
    1 -norelease 15m
    2 -norelease 15m
    3 -norelease 15m
fs = DISKVOL1 # QFS File System (Archival Media)
no_archive .
.
...
fs = DISKVOL1 # QFS File System (Archival Media)
no_archive .
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
If you need to set any common copy parameters that apply to all copies of all archive sets, enter a line of the form allsets -param value ..., where allsets is the special archive set that represents all configured archive sets and -param value ... represents one or more parameter/value pairs separated by spaces.
For full descriptions of the parameters and their values, see the ARCHIVE SET COPY PARAMETERS SECTION
section of the archiver.cmd
man page.
The directive in the example is optimal for most file systems. The special allsets archive set ensures that all archive sets are handled uniformly, for optimal performance and ease of management. The -sort path parameter ensures that the tape archive (tar) files for all copies of all archive sets are sorted by path, so that files in the same directories remain together on the archive media. The -offline_copy stageahead parameter can improve performance when archiving offline files:
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
If you need to set copy parameters for specific copies in all archive sets, enter a line of the form allsets.copy-number -param value ..., where allsets is the special archive set that represents all configured archive sets, copy-number is the number of the copy to which the directive applies, and -param value ... represents one or more parameter/value pairs separated by spaces.
For full descriptions of the parameters and their values, see the ARCHIVE SET COPY PARAMETERS SECTION
section of the archiver.cmd
man page.
In the example, the directive allsets.1 -startage 10m -startsize 500M -drives 10 -archmax 1G optimizes copy 1 for disk volumes. It starts archiving when the first file selected for archiving has been waiting for 10 minutes or when the total size of all waiting files is at least 500 megabytes. A maximum of 10 drives can be used to make the copy, and each tar file in the copy can be no larger than one gigabyte. The remaining directives, allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G -reserve set and allsets.3 -startage 48h -startsize 20G -drives 2 -archmax 24G -reserve set, optimize copies 2 and 3 for tape media. They start archiving when the first file selected for archiving has been waiting for 24 or 48 hours, respectively, or when the total size of all waiting files is at least 20 gigabytes. A maximum of 2 drives can be used to make these copies (adjust this number to suit your infrastructure), and each tar file in the copy can be no larger than 24 gigabytes. The -reserve set parameter ensures that copies 2 and 3 of each archive set are made using tape media that contain only copies from the same archive set:
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G -reserve set
allsets.3 -startage 48h -startsize 20G -drives 2 -archmax 24G -reserve set
Note that the examples in this section assume the use of disk volumes for archiving. If you use only tape volumes, specify two copies and archive to tape more frequently. The following configuration is optimal for most file systems, once you adjust the specified number of drives to suit your infrastructure:
allsets -sort path -offline_copy stageahead -reserve set
allsets.1 -startage 8h -startsize 8G -drives 2 -archmax 10G
allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G
If you need to set a directive for a specific archive set and copy, enter a line of the form archive-set-name.copy-number -param value ..., where archive-set-name is the name that you used for the archive set, copy-number is the number of the copy to which the directive applies, and -param value ... represents one or more parameter/value pairs separated by spaces.
For full descriptions of the parameters and their values, see the ARCHIVE SET COPY PARAMETERS SECTION
section of the archiver.cmd
man page.
In the example below, two archive sets are defined for the corpfs
file system: hq
and branches
. Note that the copy directives for hq.1
and hq.2
apply only to archive set hq
. Archive set branches
is unaffected:
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = corpfs
logfile = /var/adm/corporatefs.archive.log
hq /corpfs/hq/
    1 -norelease 15m
    2 -norelease 15m
branches /corpfs/branches/
    1 -norelease 15m
    2 -norelease 15m
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
hq.1 -drives 4
hq.2 -drives 2
When you have set all required copy parameters, close the copy parameters list by entering the endparams
keyword on a new line:
root@solaris:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G -reserve set
allsets.3 -startage 48h -startsize 20G -drives 2 -archmax 24G -reserve set
endparams
Optionally, you can define media pools by entering the vsnpools keyword followed by one or more directives of the form pool-name media-type volumes, where pool-name is the name that you have assigned to the pool, media-type is one of the media type codes defined in Appendix A, "Glossary of Equipment Types", and volumes is a regular expression that matches one or more volume serial numbers (VSNs). Close the directives list with the endvsnpools keyword.
Media pools are optional, and you do not generally want to restrict the media available to the archiving process. So in these examples, we do not define media pools. For more information, see the VSN POOL DEFINITIONS SECTION
of the archiver.cmd
man page.
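For illustration only, a pool definition would take the following shape. The pool name and volume range here are invented, not part of the configuration built in this chapter:

```
vsnpools
tapepool tp VOL9[0-9][0-9]
endvsnpools
```

Archive set copies can then draw volumes from a pool by name in the VSN directives; see the archiver.cmd man page for the exact syntax.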
Next, start identifying the archival media that your archive set copies should use. On a new line, enter the keyword vsns
:
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G -reserve set
allsets.3 -startage 48h -startsize 20G -drives 2 -archmax 24G -reserve set
endparams
#-----------------------------------------------------------------------
# VSN Directives
vsns
Specify media for each archive-set copy by entering a line of the form archive-set-name.copy-number media-type volumes, where archive-set-name.copy-number specifies the archive set and copy to which the directive applies, media-type is one of the media type codes defined in Appendix A, "Glossary of Equipment Types", and volumes is a regular expression that matches one or more volume serial numbers (VSNs).
For full redundancy, always assign each archive set copy to a different range of media, so that both copies never reside on the same physical volume. If possible, always assign at least one copy to removable media, such as tape.
In the example, we send the first copy of every archive set to archival disk media (type dk) with volume serial numbers in the range DISKVOL1 to DISKVOL15. We send the second copy of every archive set to tape media (type tp) with volume serial numbers in the range VOL000 to VOL199 and the third copy to tape media (type tp) with volume serial numbers in the range VOL200 to VOL399:
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G -reserve set
allsets.3 -startage 48h -startsize 20G -drives 2 -archmax 24G -reserve set
endparams
#-----------------------------------------------------------------------
# VSN Directives
vsns
allsets.1 dk DISKVOL[1-15]
allsets.2 tp VOL[0-1][0-9][0-9]
allsets.3 tp VOL[2-3][0-9][0-9]
When you have specified media for all archive-set copies, close the vsns
directives list by entering the endvsns
keyword on a new line. Save the file and close the editor.
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G -reserve set
allsets.3 -startage 48h -startsize 20G -drives 2 -archmax 24G -reserve set
endparams
#-----------------------------------------------------------------------
# VSN Directives
vsns
allsets.1 dk DISKVOL[1-15]
allsets.2 tp VOL[0-1][0-9][0-9]
allsets.3 tp VOL[2-3][0-9][0-9]
endvsns
:wq
root@solaris:~#
Check the archiver.cmd
file for errors. Use the command archiver -lv
.
The archiver -lv
command prints the archiver.cmd
file to screen and generates a configuration report if no errors are found. Otherwise, it notes any errors and stops. In the example, we have an error:
root@solaris:~# archiver -lv
Reading '/etc/opt/SUNWsamfs/archiver.cmd'.
...
13: # File System Directives
14: #
15: fs = samms
16: logfile = /var/adm/samms.archive.log
17: all .
18:     1 -norelease 15m
19:     2 -norelease 15m
20: fs=DISKVOL1   # QFS File System (Archival Media)
21: ...
42: endvsns
DISKVOL1.1 has no volumes defined
1 archive set has no volumes defined
root@solaris:~#
If errors were found in the archiver.cmd
file, correct them, and then re-check the file.
In the example above, we forgot to add the no_archive directive under the fs = DISKVOL1 file-system directive for the QFS file system that we configured as a disk archive. When we correct the omission, archiver -lv runs without errors:
root@solaris:~# archiver -lv
Reading '/etc/opt/SUNWsamfs/archiver.cmd'.
...
20: fs=DISKVOL1   # QFS File System (Archival Media)
21: no_archive .
...
42: endvsns
Notify file: /etc/opt/SUNWsamfs/scripts/archiver.sh
...
allsets.1
    startage: 10m startsize: 500M drives: 10 archmax: 1G
 Volumes:
   DISKVOL1 (/diskvols/DISKVOL1)
   ...
   DISKVOL15 (/diskvols/DISKVOL15)
 Total space available: 150T
allsets.2
    startage: 24h startsize: 20G drives: 2 archmax: 24G reserve: set
 Volumes:
   VOL000
   ...
   VOL199
 Total space available: 300T
allsets.3
    startage: 48h startsize: 20G drives: 2 archmax: 24G reserve: set
 Volumes:
   VOL200
   ...
   VOL399
 Total space available: 300T
root@solaris:~#
Tell the SAM-QFS software to reread the archiver.cmd
file and reconfigure itself accordingly. Use the samd config
command.
root@solaris:~# /opt/SUNWsamfs/sbin/samd config
Open the /etc/opt/SUNWsamfs/releaser.cmd
file in a text editor, add the line list_size = 300000
, save the file, and close the editor.
The list_size directive sets the number of files that can be released from a file system at one time to an integer in the range [10-2147483648]. If there is enough space in the .inodes file for one million inodes (allowing 512 bytes per inode), the default value is 100000. Otherwise, the default is 30000. Increasing this number to 300000 better suits typical file systems that contain significant numbers of small files.
In the example, we use the vi
editor:
root@solaris:~# vi /etc/opt/SUNWsamfs/releaser.cmd
# releaser.cmd
logfile = /var/opt/SUNWsamfs/releaser.log
list_size = 300000
:wq
root@solaris:~#
Open the /etc/opt/SUNWsamfs/stager.cmd file in a text editor, and add the line maxactive = stage-requests, where stage-requests is 500000 on hosts that have 8 gigabytes of RAM or more and 100000 on hosts that have less than 8 gigabytes. Save the file, and close the editor.
The maxactive directive sets the maximum number of stage requests that can be active at one time to an integer in the range [1-500000]. The default allows 5000 stage requests per gigabyte of host memory.
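The sizing guideline above can be expressed as a tiny helper. This is only an illustration of the rule of thumb from the text; pick_maxactive is not a SAM-QFS command:

```shell
# Rule-of-thumb helper: 500000 stage requests for hosts with at least
# 8 gigabytes of RAM, 100000 otherwise. Illustration only.
pick_maxactive() {
    mem_mb=$1    # installed memory in megabytes
    if [ "$mem_mb" -ge 8192 ]; then
        echo 500000
    else
        echo 100000
    fi
}

pick_maxactive 16384   # 16 GB host: prints 500000
```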
In the example, we use the vi
editor:
root@solaris:~# vi /etc/opt/SUNWsamfs/stager.cmd
# stager.cmd
logfile = /var/opt/SUNWsamfs/stager.log
maxactive = 300000
:wq
root@solaris:~#
Recycling is not enabled by default. So, if you require recycling of removable media volumes, go to "Configuring the Recycling Process".
If the mcf
file for the archiving SAM-QFS file system includes a network-attached tape library in the archiving equipment section, go to "Catalog Archival Media Stored in a Network-Attached Tape Library".
If you need to be able to verify the data integrity of archival tape volumes, go to "Configure Archival Media Validation".
Otherwise, go to "Configure File System Protection".
When removable media volumes contain fewer than a user-specified number of valid archive sets, the recycler consolidates the valid data on other volumes so that the original volumes can be exported for long-term storage or relabeled for reuse. You can configure recycling in either of two ways:
Configure Recycling by Archive Set
When you recycle media by archive set, you add recycling directives to the archiver.cmd
file and can specify exactly how media in each archive set copy is recycled. Recycling criteria are more narrowly applied, since only members of the archive set are considered.
Where possible, recycle media by archive sets rather than by libraries. In a SAM-QFS archiving file system, recycling is logically part of file-system operation rather than library management. Recycling complements archiving, releasing, and staging. So it makes sense to configure it as part of the archiving process. Note that you must configure recycling by archive sets if your configuration includes archival disk volumes and/or SAM-Remote.
Configure Recycling by Library
When you recycle media by library, you add recycling directives to a recycler.cmd
file and can set common recycling parameters for all media contained in a specified library. Recycling directives apply to all volumes in the library, so they are inherently less granular than archive set-specific directives. You can explicitly exclude specified volume serial numbers (VSNs) from examination. But otherwise, the recycling process simply looks for volumes that contain anything that it does not recognize as a currently valid archive file.
As a result, recycling by library can destroy files that are not part of the file system that is being recycled. If a recycling directive does not explicitly exclude them, useful data, such as back up copies of archive logs and library catalogs or archival media from other file systems, may be at risk. For this reason, you cannot recycle by library if you are using SAM-Remote. Volumes in a library controlled by a SAM-Remote server contain foreign archive files that are owned by clients rather than by the server.
Log in to the SAM-QFS file-system host as root
.
Open the /etc/opt/SUNWsamfs/archiver.cmd
file in a text editor.
In the example, we use the vi
editor.
root@solaris:~# vi /etc/opt/SUNWsamfs/archiver.cmd
In the /etc/opt/SUNWsamfs/archiver.cmd
file, scroll down to the copy params
section.
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 6h -startsize 6G -startcount 500000
allsets.2 -startage 24h -startsize 20G -startcount 500000 -drives 5
In the params section of the archiver.cmd file, enter your recycler directives by archive set, in the form archive-set directive-list, where archive-set is one of the archive sets and directive-list is a space-delimited list of directive name/value pairs (for a list of recycling directives, see the archiver.cmd man page). Then save the file and close the editor.
In the example, we add recycling directives for archive sets allsets.1
and allsets.2
. The -recycle_mingain 30
and -recycle_mingain 90
directives do not recycle volumes unless, respectively, at least 30 percent and 90 percent of the volume's capacity can be recovered. The -recycle_hwm 60
directive starts recycling when 60 percent of the removable media capacity has been used.
root@solaris:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 6h -startsize 6G -startcount 500000
allsets.1 -recycle_mingain 30 -recycle_hwm 60
allsets.2 -startage 6h -startsize 6G -startcount 500000
allsets.2 -recycle_mingain 90 -recycle_hwm 60
endparams
#-----------------------------------------------------------------------
# VSN Directives
vsns
all.1 dk DISKVOL1
all.2 tp VOL0[0-1][0-9]
endvsns
:wq
root@solaris:~#
Check the archiver.cmd
file for errors. Use the command archiver -lv
.
The command archiver -lv
reads the archiver.cmd
and generates a configuration report if no errors are found. Otherwise, it notes any errors and stops.
If errors were found in the archiver.cmd
file, correct them, and then re-check the file.
Create the recycler.cmd
file in a text editor. Specify a path and file name for the recycler log. Then save the file and close the editor.
Configure SAM-QFS to write logs to a non-SAM-QFS directory, such as /var/adm/
. In the example, we use the vi
editor, and specify /var/adm/recycler.log
:
root@solaris:~# vi /etc/opt/SUNWsamfs/recycler.cmd
logfile = /var/adm/recycler.log
:wq
root@solaris:~#
Open the /etc/opt/SUNWsamfs/scripts/recycler.sh
script in a text editor, and enter shell commands for handling recycled removable media volumes.
When the recycling process identifies a removable media volume that has been drained of valid archive copies, it calls the recycler.sh
file, a C-shell script designed to handle disposition of recycled media. You edit the file to perform the tasks that you need, from notifying administrators that volumes are ready for recycling to relabeling the volumes for reuse or exporting them from the library for long-term, historical preservation. By default, the script reminds the root
user to set up the script.
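As an illustration, a handler might simply notify the administrator when a volume is drained. The stock recycler.sh is a C-shell script; the Bourne-shell function below is a hypothetical sketch, and the argument order assumed here must be confirmed against the comments in the default /etc/opt/SUNWsamfs/scripts/recycler.sh:

```shell
# Hypothetical notification handler: format the message that recycler.sh
# might mail to the administrator. The argument order ($1 media type,
# $2 VSN) is an assumption for this sketch.
notify_recycled() {
    mtype=$1
    vsn=$2
    printf 'Volume %s (media type %s) drained; relabel or export it.\n' \
        "$vsn" "$mtype"
}

notify_recycled tp VOL123
```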
If the mcf
file for the archiving SAM-QFS file system includes a network-attached tape library in the archiving equipment section, go to "Catalog Archival Media Stored in a Network-Attached Tape Library".
Otherwise, go to "Configure File System Protection".
Log in to the SAM-QFS file-system host as root
.
Create the /etc/opt/SUNWsamfs/recycler.cmd
file in a text editor.
In the example, we use the vi
editor.
root@solaris:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for SAM-QFS archiving file systems
#-----------------------------------------------------------------------
Specify a path and file name for the recycler log using the logfile
directive.
Configure SAM-QFS to write logs to a non-SAM-QFS directory, such as /var/adm/
. In the example, we specify /var/adm/recycler.log
:
root@solaris:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for SAM-QFS archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
If there are any volumes in the archival media library that must not be recycled, enter the directive no_recycle media-type volumes, where media-type is one of the media type codes defined in Appendix A, "Glossary of Equipment Types", and volumes is a regular expression that matches one or more volume serial numbers (VSNs).
In the example, we disable recycling for volumes in the range [VOL020-VOL999
]:
root@solaris:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for SAM-QFS archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
no_recycle tp VOL[0-9][2-9][0-9]
On a new line, enter the directive library parameters, where library is the family set name that the /etc/opt/SUNWsamfs/mcf file assigns to a removable media library and parameters is a space-delimited list of parameter/value pairs drawn from the following list:
-dataquantity size sets the maximum amount of data that can be scheduled for rearchiving at one time to size, where size is a number of bytes. The default is 1 gigabyte.
-hwm percent sets the library's high-water mark, the percentage of the total media capacity that, when used, triggers recycling. The high-water mark is specified as percent, a number in the range [0-100]. The default is 95.
-ignore prevents recycling for this library, so that you can test the recycler.cmd file non-destructively.
-mail address sends recycling messages to address, where address is a valid email address. By default, no messages are sent.
-mingain percent limits recycling to volumes that can increase their available free space by at least a minimum amount, expressed as a percentage of total capacity. This minimum gain is specified as percent, a number in the range [0-100]. The defaults are 60 for volumes with a total capacity under 200 gigabytes and 90 for capacities of 200 gigabytes or more.
-vsncount count sets the maximum number of volumes that can be scheduled for rearchiving at one time to count. The default is 1.
In the example, we set the high-water mark for library library1
to 95% and require a minimum capacity gain per cartridge of 60%:
root@solaris:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for SAM-QFS archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
no_recycle tp VOL[0-9][2-9][0-9]
library1 -hwm 95 -mingain 60
Repeat the preceding step for any other libraries that are part of the SAM-QFS configuration. Then save the recycler.cmd
file, and close the editor.
root@solaris:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for SAM-QFS archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
no_recycle tp VOL[0-9][2-9][0-9]
library1 -hwm 95 -mingain 60
:wq
root@solaris:~#
If the mcf
file for the archiving SAM-QFS file system includes a network-attached tape library in the archiving equipment section, go to "Catalog Archival Media Stored in a Network-Attached Tape Library"
Otherwise, go to "Configure File System Protection".
After you mount a file system, the SAM-QFS software creates catalogs for each automated library that is configured in the mcf
file. However, if you have a network-attached automated library, you must populate the library's catalog.
The appropriate method to use to populate a library's catalog depends on the number of volumes that you include in the catalog.
To catalog a large number of volumes stored in one of these network-attached libraries, you run the SAM-QFS build_cat command against an input file that lists the volumes. Each entry in the file represents one volume using four space-delimited fields: a record index, the volume serial number (VSN), the barcode, and the media type (each field is described in the steps below).
Proceed as follows:
Log in to the file-system host as root
.
root@solaris:~#
If the archiving file system uses an Oracle StorageTek ACSLS-attached tape library, draw the required SAM-QFS archival media from the library's scratch pool and generate the catalog automatically. Use the command samimport -c volumes -s pool, where volumes is the number of volumes needed and pool is the name of the scratch media pool defined for the library. Stop here.
In the example, we request 20
tape volumes drawn from the pool called scratch
:
root@solaris:~# samimport -c 20 -s scratch
If the archiving file system uses an IBM 3494 network-attached library configured as a single, unshared logical library, place the required tape volumes in the library mail slot, and let the library catalog them automatically. Stop here.
The IBM 3494 library is configured as a single logical library when the Additional Parameters
field of the mcf
file specifies access=private
. If access=shared
, the IBM 3494 library is divided into multiple logical libraries, and you must use the method specified below.
Otherwise, if the archiving file system uses a shared IBM 3494 network-attached library or any other network-attached library, create a catalog input file using a text editor.
In the example, we use the vi
editor to create the file inputsl8500cat
:
root@solaris:~# vi inputsl8500cat
~
"~/inputsl8500cat" [New File]
Start a record by entering the record index
. Always enter 0
(zero) for the first record, then increment the index for each succeeding record. Enter a space to indicate the end of the field.
Rows define records and spaces delimit fields in build_cat
input files. The value of the first field, the index
, is simply a consecutive integer starting from 0
that identifies the record within the SAM-QFS catalog. In the example, this is the first record, so we enter 0
:
0
~
"~/inputsl8500cat" [New File]
In the second field of the record, enter the volume serial number (VSN) of the tape volume or, if there is no VSN, a single ?
(question mark). Then enter a space to indicate the end of the field.
Enclose values that contain white-space characters (if any) in double quotation marks: "VOL 01"
. In this example, the VSN of the first volume does not contain spaces:
0 VOL001
~
"~/inputsl8500cat" [New File]
In the third field, enter the barcode of the volume (if different from the volume serial number), the volume serial number, or, if there is no volume serial number, the string NO_BAR_CODE
. Then enter a space to indicate the end of the field.
In the example, the barcode of the first volume has the same value as the VSN:
0 VOL001 VOL001
~
"~/inputsl8500cat" [New File]
Finally, in the fourth field, enter the media type of the volume. Then enter a space to indicate the end of the field.
The media type is a two-letter code, such as ti
(for StorageTek T10000 media) or li
(for LTO media). See Appendix A, "Glossary of Equipment Types", for a comprehensive listing of media equipment types. In the example, we are using an Oracle StorageTek SL8500 ACSLS-attached tape library with StorageTek T10000 tape drives, so we enter ti
:
0 VOL001 VOL001 ti
~
"~/inputsl8500cat" [New File]
Repeat steps 3-6 to create additional records for each of the volumes that you intend to use with SAM-QFS. Then save the file.
0 VOL001 VOL001 ti
1 VOL002 VOL002 ti
...
13 VOL014 VOL014 ti
:wq
root@solaris:~#
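Steps 3 through 6 above can also be scripted. A minimal sketch that generates the fourteen records shown, assuming the barcode equals the VSN and that all volumes are StorageTek T10000 (type ti):

```shell
# Generate the fourteen build_cat input records shown above.
# Record index starts at 0; VSN and barcode are identical here.
: > inputsl8500cat
i=0
while [ "$i" -lt 14 ]; do
    vsn=$(printf 'VOL%03d' $(( i + 1 )))
    printf '%d %s %s ti\n' "$i" "$vsn" "$vsn" >> inputsl8500cat
    i=$(( i + 1 ))
done
```

Adjust the count, VSN pattern, and media type to match your own volumes before running anything like this.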
Create the catalog with the build_cat
input-file
catalog-file
command, where input-file
is the name of your input file and catalog-file
is the full path to the library catalog.
If you have specified a catalog name in the Additional Parameters field of the mcf file, use that name. Otherwise, the SAM-QFS software creates default catalogs in the /var/opt/SUNWsamfs/catalog/ directory using the file name family-set-name, where family-set-name is the equipment name that you used for the library in the mcf file. In the example, we use the family set SL8500:
root@solaris:~# build_cat inputsl8500cat /var/opt/SUNWsamfs/catalog/SL8500
If the archiving file system is shared, repeat the preceding step on each potential metadata server.
The archiving file system is now complete and ready for use.
To protect a file system, you need to do two things:
You must protect the files that hold your data.
You must protect the file system itself, so that you can use, organize, locate, access, and manage your data.
In a SAM-QFS archiving file system, file data is automatically protected by the archiver: modified files are automatically copied to archival storage media, such as tape. But if you backed up only your files and then suffered an unrecoverable failure in a disk device or RAID group, you would have the data but no easy way to use it. You would have to create a substitute file system, identify each file, determine its proper location within the new file system, ingest it, and recreate lost relationships between it and users, applications, and other files. This kind of recovery is, at best, a daunting and long drawn-out process.
So, for fast, efficient recovery, you have to actively protect the file-system metadata that make files and archive copies usable. You must back up directory paths, inodes, access controls, symbolic links, and pointers to copies archived on removable media.
You protect SAM-QFS file-system metadata by scheduling recovery points and saving archive logs. A recovery point is a compressed file that stores a point-in-time backup copy of the metadata for a SAM-QFS file system. In the event of a data loss—anything from accidental deletion of a user file to catastrophic loss of a whole file system—you can recover to the last known-good state of the file or file system almost immediately by locating the last recovery point at which the file or file system remained intact. You then restore the metadata recorded at that time and either stage the files indicated in the metadata to the disk cache from archival media or, preferably, let the file system stage files on demand, as users and applications access them.
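Recovery points are created with the SAM-QFS samfsdump utility and restored with samfsrestore. As a minimal illustrative sketch (not the scheduled procedure configured later in this section), assuming the archiving file system /samms, the recovery-point directory /zfs1/samms_recovery, and an invented date-stamped file name:

```
root@solaris:~# cd /samms
root@solaris:~# /opt/SUNWsamfs/sbin/samfsdump -f /zfs1/samms_recovery/150325 .
...
root@solaris:~# cd /restored-samms
root@solaris:~# /opt/SUNWsamfs/sbin/samfsrestore -f /zfs1/samms_recovery/150325
```

samfsrestore is normally run from the mount point of a newly created, empty file system; consult the samfsdump man page for the available options before relying on either invocation.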
Like any point-in-time backup copy, a recovery point is seldom a complete record of the state of the file system at the time when a failure occurs. Inevitably, at least a few files are created and changed after one recovery point is completed and before the next one is created. You can—and should—minimize this problem by scheduling creation of recovery points frequently and at times when the file system is not in use. But, in practice, scheduling has to be a compromise, because the file system exists to be used.
For this reason, you must also save point-in-time copies of the archiver log file. As each data file is archived, the log file records the volume serial number of the archival media, the archive set and copy number, the position of the archive (tar
) file on the media, and the path to and name of the data file within the tar
file. With this information, you can recover any files that are missing from the recovery point using Solaris or SAM-QFS tar
utilities. However, this information is volatile. Like most system logs, the archiver log grows rapidly and must thus be overwritten frequently. If you do not make regular copies to complement your recovery points, you will not have log information when you need it.
File system protection thus requires some planning. On the one hand, you want to create recovery points and log-file copies frequently enough and retain them long enough to give you the best chance of recovering lost or damaged files and file systems. On the other hand, you do not want to create recovery points and log-file copies while data files are actively changing and you need to be cognizant of the disk space that they consume (recovery point files and logs can be large). Accordingly, this section recommends a broadly applicable configuration that can be used with many file system configurations without modification. When changes are necessary, the recommended configuration illustrates the issues and serves as a good starting point. The remainder of this section provides instructions for creating and managing recovery points. It contains the following subsections:
For each archiving file system that you have configured, proceed as follows:
Log in to the file-system host as root
.
root@solaris:~#
Select a storage location for the recovery point files. Select an independent file system that can be mounted on the file system host.
Make sure that the selected file system has enough space to store both new recovery point files and the number of recovery point files that you plan to retain at any given time.
Recovery point files can be large and you will have to store a number of them, depending on how often you create them and how long you retain them.
Make sure that the selected file system does not share any physical devices with the archiving file system.
Do not store recovery point files in the file system that they are meant to protect. Do not store recovery point files on logical devices, such as partitions or LUNs, that reside on physical devices that also host the archiving file-system.
In the selected file system, create a directory to hold recovery point files. Use the command mkdir
mount-point
/
path
, where mount-point
is the mount point for the selected independent file system and path
is the path and name of the chosen directory.
Do not store recovery point files for several archiving file systems in a single, catch-all directory. Create a separate directory for each, so that recovery point files are organized and easily located when needed.
In the example, we are configuring recovery points for the archiving file system /sam1. So we have created the directory /zfs1/sam1_recovery on the independent file system /zfs1:
root@solaris:~# mkdir /zfs1/sam1_recovery
If a file system does not share any physical devices with the archiving file system, create a subdirectory in it for storing point-in-time copies of the archiver logs.
In the example, we choose to store log copies in the /var directory of the host's root file system. We are configuring file system protection for the archiving file system /sam1, so we create the directory /var/sam1_archlogs:
root@solaris:~# mkdir /var/sam1_archlogs
Next, Automatically Create Recovery Points and Save Archiver Logs.
While you can create metadata recovery point files automatically either by creating entries in the crontab file or by using the scheduling feature of the SAM-QFS File System Manager graphical user interface, the latter method does not automatically save archiver log data. So this section focuses on the crontab approach. If you wish to use the graphical user interface to schedule recovery points, refer to the StorageTek Storage Archive Manager and StorageTek QFS Software File System Manager User's Guide.
The procedure below creates two crontab entries: one that runs daily, first deleting out-of-date recovery point files and then creating a new recovery point, and one that runs weekly, saving the archiver log. For each archiving file system that you have configured, proceed as follows:
Log in to the file-system host as root.
root@solaris:~#
Open the root user's crontab file for editing. Use the command crontab -e.
The crontab command opens an editable copy of the root user's crontab file in the text editor specified by the EDITOR environment variable (for full details, see the Solaris crontab man page). In the example, we use the vi editor:
root@solaris:~# crontab -e
...
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
On a new line, specify the time of day when the work will be done by entering minutes hour * * *, where:
minutes is an integer in the range [0-59] that specifies the minute when the job starts.
hour is an integer in the range [0-23] that specifies the hour when the job starts.
* (asterisk) specifies unused values. For a task that runs daily, the values for day of the month [1-31], month [1-12], and day of the week [0-6] are unused.
Spaces separate the fields in the time specification.
Make sure that hour and minutes specify a time when files are not being created or modified. Creating a recovery point file when file-system activity is minimal ensures that the file reflects the state of the archive as accurately and completely as possible. Ideally, all new and altered files will have been archived before the time you specify.
In the example, we schedule work to begin at 2:10 AM every day:
...
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * *
Continuing on the same line, enter the shell commands that clean up the old recovery point files. Enter the text ( find directory -type f -mtime +retention -print | xargs -l1 rm -f;, where:
( (the opening parenthesis) marks the start of the command sequence that the crontab entry will execute.
directory is the path and directory name of the directory where recovery point files are stored and thus the point where we want the Solaris find command to start its search.
-type f is the find command option that specifies plain files (as opposed to block special files, character special files, directories, pipes, etc.).
-mtime +retention is the find command option that specifies files that have not been modified for more than retention, an integer representing the number of days that recovery point files are retained (note that find measures -mtime in 24-hour periods, not hours).
-print is the find command option that lists all files found to standard output.
| xargs -l1 rm -f pipes the output from -print to the Solaris command xargs -l1, which sends one line at a time as arguments to the Solaris command rm -f, which in turn deletes each file found.
; (semicolon) marks the end of the command line.
In the example, the crontab entry searches the directory /zfs1/sam1_recovery for any files that have not been modified for more than 72 days and deletes any it finds:
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/sam1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f;
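Before trusting the cleanup clause in production, you can rehearse it against a throwaway directory. The sketch below is an illustration only: the paths are scratch stand-ins created with mktemp, not real recovery-point directories, and touch -t backdates one file so that -mtime +72 (more than 72 days old) selects it for deletion.

```shell
# Rehearsal of the cleanup clause in a scratch directory (all paths
# here are temporary stand-ins, not real recovery-point directories).
SCRATCH=$(mktemp -d)
touch -t 202001010000 "$SCRATCH/200101"   # backdated recovery point, > 72 days old
touch "$SCRATCH/$(date +%y%m%d)"          # today's recovery point
find "$SCRATCH" -type f -mtime +72 -print | xargs -l1 rm -f
ls "$SCRATCH"                             # only today's file should remain
```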
Continuing on the same line, enter the shell command that changes to the directory where the recovery point is to be created. Enter the text cd mount-point;, where mount-point is the root directory of the archiving file system and ; (semicolon) marks the end of the command line.
The command that creates recovery point files, samfsdump, backs up the metadata for all files in the current directory and in all subdirectories. In the example, we change to the /sam1 directory. Note that the crontab entry is still a single line, even though the line appears to wrap to fit the format of this book (the apparent end of the line has been escaped by a backslash):
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/sam1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /sam1;
Continuing on the same line, enter the shell commands that create the new daily recovery point and close the command sequence. Enter the text /opt/SUNWsamfs/sbin/samfsdump -f directory/`date +\%y\%m\%d` ), where:
/opt/SUNWsamfs/sbin/samfsdump is the command that creates recovery points (see the samfsdump man page for full details).
-f directory is the samfsdump command option that specifies the location where the recovery point file will be saved, and directory is the directory that we created to hold recovery points for this file system.
`date +\%y\%m\%d` uses the Solaris date command with the formatting template +\%y\%m\%d to create a name for the recovery point file: YYMMDD, where YY is the last two digits of the current year, MM is the two-digit number of the current month, and DD is the two-digit day of the month (for example, 140122 for January 22, 2014). Note that the command substitution uses backquotes and that each % sign is escaped with a backslash, because crontab otherwise treats an unescaped % as the end of the command.
) (the closing parenthesis) marks the end of the command sequence that the crontab entry will execute.
In the example, we specify the recovery-point directory that we created above, /zfs1/sam1_recovery. Note that the crontab entry is still a single line, even though the line appears to wrap to fit the format of this book (the apparent ends of lines have been escaped by backslashes):
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/sam1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /sam1; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/sam1_recovery/`date +\%y\%m\%d` )
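You can preview the file name that the embedded date command generates by running it directly in a shell. The \% escapes are needed only inside the crontab file, where an unescaped % is treated as an end-of-command marker; at an interactive prompt a plain % is used:

```shell
# Preview the YYMMDD recovery-point file name produced by the date
# template (no \% escapes are needed outside of crontab).
NAME=$(date +%y%m%d)
echo "$NAME"
```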
On a new line, specify the time and day when the archiver log will be saved by entering minutes hour * * day, where:
minutes is an integer in the range [0-59] that specifies the minute when the job starts.
hour is an integer in the range [0-23] that specifies the hour when the job starts.
* (asterisk) specifies unused values. For a task that runs weekly, the values for day of the month [1-31] and month [1-12] are unused.
day is an integer in the range [0-6] that specifies the day of the week when the job starts, where 0 is Sunday and 6 is Saturday.
Spaces separate the fields in the time specification.
Make sure that hour and minutes specify a time when files are not being created or modified.
In the example, we schedule work to begin at 3:15 AM every Sunday:
...
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/sam1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /sam1; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/sam1_recovery/`date +\%y\%m\%d` )
15 3 * * 0
Continuing on the same line, enter a shell command that moves the current archiver log to the log-copy directory and gives it a unique, date-based name. Enter the text ( mv /var/adm/sam1.archive.log /var/sam1_archlogs/`date +\%y\%m\%d`;.
This step saves log entries that would be overwritten if left in the active log file.
...
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/sam1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /sam1; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/sam1_recovery/`date +\%y\%m\%d` )
15 3 * * 0 ( mv /var/adm/sam1.archive.log /var/sam1_archlogs/`date +\%y\%m\%d`;
Continuing on the same line, enter a shell command that reinitializes the archiver log file and closes the command sequence. Enter the text touch /var/adm/sam1.archive.log ).
In the example, note that the crontab entry is still a single line, even though the line appears to wrap to fit the format of this book (the apparent end of the line has been escaped by a backslash):
...
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/sam1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /sam1; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/sam1_recovery/`date +\%y\%m\%d` )
15 3 * * 0 ( mv /var/adm/sam1.archive.log /var/sam1_archlogs/`date +\%y\%m\%d`; \
touch /var/adm/sam1.archive.log )
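The rotation step can likewise be rehearsed with scratch stand-ins for the log and log-copy directories (the directories below are temporary ones created with mktemp; only the mv-then-touch behavior is being demonstrated):

```shell
# Scratch rehearsal of the log-rotation step: move the active log to a
# dated copy, then recreate an empty active log.
LOGDIR=$(mktemp -d)    # stand-in for /var/adm
ARCHDIR=$(mktemp -d)   # stand-in for /var/sam1_archlogs
echo "archived file.001" > "$LOGDIR/sam1.archive.log"
mv "$LOGDIR/sam1.archive.log" "$ARCHDIR/$(date +%y%m%d)"
touch "$LOGDIR/sam1.archive.log"
ls "$ARCHDIR"                          # the dated copy
wc -c < "$LOGDIR/sam1.archive.log"     # the new, empty active log
```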
Save the file, and close the editor.
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/sam1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /sam1; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/sam1_recovery/`date +\%y\%m\%d` )
15 3 * * 0 ( mv /var/adm/sam1.archive.log /var/sam1_archlogs/`date +\%y\%m\%d`; \
touch /var/adm/sam1.archive.log )
:wq
root@solaris:~#
Where to go from here:
If you need to be able to verify the data integrity of archival tape volumes, go to "Configure Archival Media Validation".
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Media validation is a technique that evaluates the data integrity of tape media using SCSI verify
commands. The SCSI driver on the host calculates a CRC checksum for the logical blocks of data that it writes to the drive and sends a verify
command. The drive reads the data blocks, calculates its own checksum, compares the result with the value supplied by the driver, and returns an error if there is a discrepancy. The drive discards the data it reads as soon as the checksum is complete, so there is no additional I/O-related overhead on the host.
SAM-QFS supports media validation in two ways:
You can Configure SAM-QFS to Support Data Integrity Validation to validate data on StorageTek T10000 tape media, either manually or automatically under SAM-QFS Periodic Media Verification.
You can also Configure SAM-QFS Periodic Media Verification to automatically validate data on both StorageTek T10000 tape media and other formats, such as LTO Ultrium.
Data Integrity Validation is a feature of Oracle StorageTek tape drives that works with the SAM-QFS software to ensure the integrity of stored data. When the feature is enabled (div = on or div = verify), both the server host and the drive calculate and compare checksums during I/O. During write operations, the server calculates a four-byte checksum for each data block and passes the checksum to the drive along with the data. The tape drive then recalculates the checksum and compares the result to the value supplied by the server. If the values agree, the drive writes both the data block and the checksum to tape. During read operations, both the drive and the host read a data block and its associated checksum from tape. Each recalculates the checksum from the data block and compares the result to the stored checksum. If checksums do not match at any point, the drive notifies the application software that an error has occurred.
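The write-time handshake can be pictured with ordinary shell tools. In this toy sketch, the POSIX cksum utility stands in for the drive's proprietary four-byte checksum: the "server" and the "drive" each compute a CRC over the same block and compare results.

```shell
# Toy model of the Data Integrity Validation handshake. cksum stands
# in for the drive's four-byte checksum; a mismatch would indicate
# that the block was corrupted in transit.
BLOCK=$(mktemp)
printf 'data block payload' > "$BLOCK"
server_sum=$(cksum < "$BLOCK" | awk '{print $1}')   # computed by the host
drive_sum=$(cksum < "$BLOCK" | awk '{print $1}')    # recomputed by the drive
if [ "$server_sum" = "$drive_sum" ]; then echo OK; else echo ERROR; fi
```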
The div = verify option provides an additional layer of protection when writing data. When the write operation is complete, the host asks the tape drive to reverify the data. The drive then rescans the data, recalculates checksums, and compares the results to the checksums stored on the tape. The drive performs all operations internally, with no additional I/O (data is discarded), so there is no additional overhead on the host system. You can also use the SAM-QFS tpverify (tape-verify) command to perform this step on demand.
To configure Data Integrity Validation, proceed as follows:
Log in to the SAM-QFS server as root.
In the example, the metadata server is named samfs-mds
:
[samfs-mds]root@solaris:~#
Make sure that the metadata server is running Oracle Solaris 11 or higher.
[samfs-mds]root@solaris:~# uname -r
5.11
[samfs-mds]root@solaris:~#
Make sure that the archival storage equipment defined in the SAM-QFS mcf file includes compatible tape drives: StorageTek T10000C (minimum firmware level 1.53.315) or T10000D.
If any SAM-QFS operations are currently running, stop them using the samd stop command.
[samfs-mds]root@solaris:~# samd stop
Open the /etc/opt/SUNWsamfs/defaults.conf file in a text editor. Uncomment the line #div = off, if necessary, or add it if it is not present.
By default, div (Data Integrity Validation) is off (disabled).
In the example, we open the file in the vi editor and uncomment the line:
[samfs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
div = off
To enable Data Integrity Validation read, write, and verify operations, change the line div = off to div = on, and save the file.
Data will be verified as each block is written and read, but the SAM-QFS archiver software will not verify complete file copies after they are archived.
[samfs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
div = on
:wq
[samfs-mds]root@solaris:~#
To enable the verify-after-write option of the Data Integrity Validation feature, change the line div = off to div = verify, and save the file.
The host and the drive carry out Data Integrity Validation as each block is written or read. In addition, whenever a complete archive request is written out to tape, the drive rereads the newly stored data and checksums, recalculates, and compares the stored and calculated results.
[samfs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
div = verify
:wq
[samfs-mds]root@solaris:~#
Tell the SAM-QFS software to reread the defaults.conf file and reconfigure itself accordingly. Use the samd config command.
[samfs-mds]root@solaris:~# /opt/SUNWsamfs/sbin/samd config
If you stopped SAM-QFS operations in an earlier step, restart them now using the samd start command.
[samfs-mds]root@solaris:~# samd start
Data Integrity Validation is now configured.
Where to go from here:
If you need to automate data integrity validation, go to "Configure SAM-QFS Periodic Media Verification".
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Starting with Release 5.4, you can set up Periodic Media Verification (PMV) for SAM-QFS archiving file systems. Periodic Media Verification automatically checks the data integrity of the removable media in a file system. It checks StorageTek T10000 media using StorageTek Data Integrity Validation and other drives using the widely supported SCSI verify(6) command.
The Periodic Media Verification feature adds a SAM-QFS daemon, verifyd, that periodically applies the tpverify command, logs any errors detected, notifies administrators, and automatically performs specified recovery actions. You configure Periodic Media Verification by setting policy directives in a configuration file, verifyd.cmd. Policies can specify the times when verification scans are run, the types of scans performed, the libraries and drives that can be used, the tape volumes that should be scanned, and the actions that SAM-QFS takes when errors are detected. SAM-QFS can, for example, automatically rearchive files that contain errors and/or recycle tape volumes that contain errors.
If you have not already done so, Configure SAM-QFS to Support Data Integrity Validation.
Log in to the SAM-QFS server as root.
In the example, the metadata server is named samfs-mds
:
[samfs-mds]root@solaris:~#
Make sure that the metadata server is running Oracle Solaris 11 or higher.
[samfs-mds]root@solaris:~# uname -r
5.11
[samfs-mds]root@solaris:~#
Open the /etc/opt/SUNWsamfs/verifyd.cmd file in a text editor.
In the example, we open the file in the vi editor:
[samfs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
To enable Periodic Media Verification, enter the line pmv = on.
By default, Periodic Media Verification is off. In the example, we set it on:
# Enable SAM-QFS Periodic Media Validation (PMV)
pmv = on
Set a run time. Enter the line run_time = always to run verification continuously, or enter run_time = HHMM hhmm DD dd, where HHMM and hhmm are, respectively, starting and ending times and DD and dd are an optional starting and ending day.
HH and hh are hours of the day in the range 00-24, MM and mm are numbers of minutes in the range 00-60, and DD and dd are days of the week in the range [0-6], where 0 is Sunday and 6 is Saturday. The default is 2200 0500 6 0 (from 10:00 PM Saturday until 5:00 AM Sunday).
Verification will not, however, compete with more immediately important file system operations: the verification process automatically yields any tape volumes and drives that the archiver and stager require. So, in the example, we set the run time to always:
:
pmv = on
run_time = always
Specify a verification method. Enter the line pmv_method = specified-method, where specified-method is one of the following:
The standard method is specifically for use with Oracle StorageTek T10000C and later tape drives. Optimized for speed, the standard method verifies the edges, beginning, end, and first 1,000 blocks of the media.
The complete method is also for use with Oracle StorageTek T10000C and later tape drives. It verifies the media error correction code (ECC) for every block on the media.
The complete plus method is also for use with Oracle StorageTek T10000C and later tape drives. It verifies both the media error correction code (ECC) and the Data Integrity Validation checksum for each block on the media (see "Configure SAM-QFS to Support Data Integrity Validation").
The legacy method can be used with all other tape drives and is used automatically when media is marked bad in the catalog and when drives do not support the method specified in the verifyd.cmd file. It runs a 6-byte, fixed-block mode SCSI Verify command, skipping previously logged defects. When a new permanent media error is found, the legacy method skips to the next file and logs the newly discovered error in the media defects database.
The mir rebuild method rebuilds the media information region (MIR) of an Oracle StorageTek tape cartridge if the MIR is missing or damaged. It works with media that is marked bad in the media catalog and is automatically specified when MIR damage is detected.
In the example, we are using LTO drives, so we specify legacy:
pmv = on
run_time = always
pmv_method = legacy
To use all available libraries and drives for verification, enter the line pmv_scan = all.
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = all
To use all available drives in a specified library for verification, enter the line pmv_scan = library equipment-number, where equipment-number is the equipment number assigned to the library in the file system's mcf file.
In the example, we let the verification process use all drives in library 800:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = library 800
To limit the number of drives that the verification process can use in a specified library, enter the line pmv_scan = library equipment-number max_drives number, where equipment-number is the equipment number assigned to the library in the file system's mcf file and number is the maximum number of drives that can be used.
In the example, we let the verification process use at most 2 drives in library 800:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = library 800 max_drives 2
To specify the drives that the verification process can use in a specified library, enter the line pmv_scan = library equipment-number drive drive-numbers, where equipment-number is the equipment number assigned to the library in the file system's mcf file and drive-numbers is a space-delimited list of the equipment numbers assigned to the specified drives in the mcf file.
In the example, we let the verification process use drives 903 and 904 in library 900:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = library 900 drive 903 904
To specify the drives that the verification process can use in two or more libraries, enter the line pmv_scan = library-specification library-specification ..., where each library-specification is a library equipment number followed, optionally, by a max_drives limit or a drive list, as described in the preceding steps.
In the example, we let the verification process use at most 2 drives in library 800 and drives 903 and 904 in library 900:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = library 800 max_drives 2 library 900 drive 903 904
To disable periodic media verification and prevent it from using any equipment, enter the line pmv_scan = off.
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = off
To automatically flag media for recycling once periodic media verification has detected a specified number of permanent errors, enter the line action = recycle perms number-errors, where number-errors is the number of errors.
In the example, we configure SAM-QFS to flag the media for recycling after 10 errors have been detected:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = all
action = recycle perms 10
To automatically re-archive files that contain bad blocks after errors have accumulated for a specified period, enter the line action = rearch age time, where time is a space-delimited list of any combination of the value-unit pairs SECONDSs, MINUTESm, HOURSh, DAYSd, and/or YEARSy, and where SECONDS, MINUTES, HOURS, DAYS, and YEARS are integers.
The oldest media defect must have aged for the specified period before the file system is scanned for files that need archiving. In the example, we set the re-archiving age to 1 (one) minute:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = all
action = rearch age 1m
To mark the media as bad when periodic media verification detects a permanent media error, and to take no further action, enter the line action = none.
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = all
action = none
Specify the tape volumes that should be verified periodically. Enter the line pmv_vsns = selection-criterion, where selection-criterion is all or a space-delimited list of regular expressions that specify one or more volume serial numbers (VSNs).
The default is all. In the example, we supply three regular expressions: ^VOL0[01][0-9] and ^VOL23[0-9] specify two sets of volumes with volume serial numbers in the ranges VOL000 to VOL019 and VOL230 to VOL239, respectively, while VOL400 specifies the volume with that specific volume serial number:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = all
pmv_vsns = ^VOL0[01][0-9] ^VOL23[0-9] VOL400
SAM-QFS will not try to verify volumes if they need to be audited, if they are scheduled for recycling, if they are unavailable, if they are foreign (non-SAM-QFS) volumes, or if they do not contain data. Cleaning cartridges, volumes that are unlabeled, and volumes that have duplicate volume serial numbers, are also excluded.
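Assuming the pmv_vsns expressions behave like ordinary extended regular expressions, you can sanity-check them with grep -E before committing them to verifyd.cmd. The VSN list below is invented for illustration:

```shell
# Which of these sample VSNs do the three expressions select?
# (The VSN list is made up; grep -E stands in for verifyd's matching.)
matches=$(printf '%s\n' VOL005 VOL019 VOL020 VOL234 VOL400 VOL401 XYZ001 |
  grep -E '^VOL0[01][0-9]|^VOL23[0-9]|VOL400')
echo "$matches"   # VOL005, VOL019, VOL234, and VOL400 match
```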
SAM-QFS identifies and ranks candidates for verification based on the amount of time that has passed since the volume was last verified and, optionally, last modified and/or last mounted. So define the desired verification policy. Enter the line pmv_policy = verified age vertime with, optionally, the parameters modified age modtime and/or mounted age mnttime, where vertime, modtime, and mnttime are lists of value-unit pairs. The values are non-negative integers, and the units are y (years), m (months), d (days), H (hours), M (minutes), and S (seconds).
The verified age parameter specifies the minimum time that must have passed since the volume was last verified. The modified age parameter specifies the minimum time that must have passed since the volume was last modified. The mounted age parameter specifies the minimum time that must have passed since the volume was last mounted. The default policy is the single parameter verified age 6m (six months). In the example, we set the last-verified age to three months and the last-modified age to fifteen months:
pmv = on
run_time = always
pmv_method = legacy
pmv_scan = all
pmv_vsns = ^VOL0[01][0-9] ^VOL23[0-9] VOL400
pmv_policy = verified age 3m modified age 15m
Save the /etc/opt/SUNWsamfs/verifyd.cmd file, and close the editor.
Check the verifyd.cmd file for errors by running the sam-fsd command. Correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
root@solaris:~# sam-fsd
You have finished configuring periodic media verification.
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Write-once read-many (WORM) files are used in many applications for legal and archival reasons. WORM-enabled SAM-QFS file systems support default and customizable file-retention periods, data and path immutability, and subdirectory inheritance of the WORM setting. You can use either of two WORM modes:
standard compliance mode (the default)
The standard WORM mode starts the WORM retention period when a user sets UNIX setuid permission on a directory or non-executable file (chmod 4000 directory|file). Since setting setuid (set user ID upon execution) permission on an executable file presents security risks, files that also have UNIX execute permission cannot be retained using this mode.
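The trigger itself is plain UNIX permission handling, so it can be previewed on any file system. On a WORM-capable SAM-QFS file system the chmod below would start the retention clock; on an ordinary file system, as here, it merely sets the setuid bit, which ls shows as S because the file is not executable:

```shell
# Set setuid (mode 4000) on a scratch, non-executable file and show
# the resulting permission string. On a WORM-capable file system this
# same chmod would trigger retention.
F=$(mktemp)
chmod 4000 "$F"
MODE=$(ls -l "$F" | cut -c1-10)
echo "$MODE"   # ---S------
```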
emulation mode
The WORM emulation mode starts the WORM retention period when a user makes a writable file or directory read-only (chmod 444 directory|file), so executable files can be retained.
Both the standard and emulation modes offer a strict WORM implementation and a less restrictive, lite implementation that relaxes some restrictions for root users. Neither the strict nor the lite implementations allow changes to data or paths once retention has been triggered on a file or directory. The strict implementations do not let anyone shorten the specified retention period (by default, 43,200 minutes/30 days) or delete files or directories prior to the end of the retention period. They also do not let anyone use sammkfs to delete volumes that hold currently retained files and directories. The strict implementations are thus well suited to meeting legal and regulatory compliance requirements. The lite implementations let root users shorten retention periods, delete files and directories, and delete volumes using the file-system creation command sammkfs. The lite implementations may thus be better choices when both data integrity and flexible management are primary requirements.
Take care when selecting a WORM implementation and when enabling retention on a file. In general, use the least restrictive option that is consistent with your requirements. You cannot change from standard to emulation mode or vice versa, so choose carefully. If management flexibility is a priority or if retention requirements may change at a later date, select a lite implementation. You can upgrade from the lite version of a WORM mode to the strict version, should it later prove necessary, but you cannot change from a strict implementation to a lite implementation. Once a strict WORM implementation is in effect, files must be retained for their full specified retention periods. So set retention to the shortest value consistent with requirements.
You enable WORM support on a file system using mount options. Proceed as follows.
Log in as root.
root@solaris:~#
Back up the operating system's /etc/vfstab file.
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Open the /etc/vfstab file in a text editor and locate the entry for the SAM-QFS file system for which you want to enable WORM support.
In the example, we open the /etc/vfstab file in the vi editor and locate the archiving file system worm1:
root@solaris:~# vi /etc/vfstab
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes -
To enable the strict implementation of the standard WORM compliance mode, enter the worm_capable option in the Mount Options column of the vfstab file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes worm_capable
To enable the lite implementation of the standard WORM compliance mode, enter the worm_lite
option in the Mount Options
column of the vfstab
file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes worm_lite
To enable the strict implementation of the WORM emulation mode, enter the worm_emul
option in the Mount Options
column of the vfstab
file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes worm_emul
To enable the lite implementation of the WORM emulation mode, enter the emul_lite
option in the Mount Options
column of the vfstab
file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes emul_lite
To change the default retention period for files that are not explicitly assigned a retention period, add the def_retention=period
option to the Mount Options
column of the vfstab
file, where period
takes one of the forms described below.
The value of period
can take any of three forms:

permanent
or 0
, which specifies permanent retention.

YEARSyDAYSdHOURShMINUTESm
, where YEARS
, DAYS
, HOURS
, and MINUTES
are non-negative integers and where any specifier may be omitted. So, for example, 5y3d1h4m
, 2y12h
, and 365d
are all valid.

MINUTES
, where MINUTES
is an integer in the range [1-2147483647]
.
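As an illustrative aid only (not part of SAM-QFS), the following shell helper converts the suffixed YEARSyDAYSdHOURShMINUTESm form into minutes, assuming a 365-day year; see the mount_samfs man page for the authoritative def_retention behavior:

```shell
# Illustrative sketch: convert a suffixed def_retention period string
# (YEARSyDAYSdHOURShMINUTESm) into minutes. Assumes 1y = 365d; handles
# only the suffixed form, not "permanent", "0", or a bare MINUTES value.
period_to_minutes() {
  echo "$1" | awk '{
    y = d = h = m = 0
    if (match($0, /[0-9]+y/)) y = substr($0, RSTART, RLENGTH - 1)
    if (match($0, /[0-9]+d/)) d = substr($0, RSTART, RLENGTH - 1)
    if (match($0, /[0-9]+h/)) h = substr($0, RSTART, RLENGTH - 1)
    if (match($0, /[0-9]+m/)) m = substr($0, RSTART, RLENGTH - 1)
    print ((y * 365 + d) * 24 + h) * 60 + m
  }'
}

period_to_minutes 540d     # 777600 minutes (540 days)
period_to_minutes 2y12h    # 1051920 minutes
```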
Set a default retention period if you must set retention periods that extend beyond the year 2038. UNIX utilities such as touch
use signed, 32-bit integers to represent time as the number of seconds that have elapsed since January 1, 1970. The largest number of seconds that a 32-bit integer can represent translates to January 18, 2038 at 10:14 PM.
If a value is not supplied, def_retention
defaults to 43200
minutes (30 days). In the example, we set the retention period for a standard WORM-capable file system to 777600
minutes (540 days):
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes worm_capable,def_retention=777600
Save the vfstab
file, and close the editor.
The file system is WORM-enabled. Once one or more WORM files are resident in the file system, the SAM-QFS software will update the file system superblock to reflect the WORM capability. Any subsequent attempt to rebuild the file system with sammkfs
will fail if the file system has been mounted with the strict worm_capable
or worm_emul
mount option.
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Starting with Release 5.4, SAM-QFS can import data from and export data to Linear Tape File System (LTFS) volumes. This capability facilitates interworking with systems that use LTFS as their standard tape format. It also eases transfer of very large volumes of data between remote SAM-QFS sites, when typical wide-area network (WAN) connections are too slow or too expensive for the task.
For information on using and administering LTFS volumes, see the samltfs
man page and the StorageTek Storage Archive Manager and QFS Software Maintenance and Administration Guide.
To enable SAM-QFS LTFS support, proceed as follows:
Log in to the SAM-QFS metadata server as root
.
[samfs-mds]root@solaris:~#
If any SAM-QFS operations are currently running, stop them using the samd stop
command.
[samfs-mds]root@solaris:~# samd stop
Open the /etc/opt/SUNWsamfs/defaults.conf
file in a text editor.
In the example, we open the file in the vi
editor:
[samfs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
In the defaults.conf
file, add the line ltfs = mount_point workers volumes
, where mount_point
is the directory in the host file system where the LTFS file system should be mounted, workers
is an optional maximum number of drives to use for LTFS, and volumes
is an optional maximum number of tape volumes per drive. Then save the file, and close the editor.
In the example, we specify the LTFS mount point as /mnt/ltfs
and accept the defaults for the other parameters:
[samfs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
ltfs = /mnt/ltfs
:wq
[samfs-mds]root@solaris:~#
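For comparison, a hypothetical entry that sets all three fields of the ltfs directive might look like the following; the values 2 and 3 are illustrative assumptions, not recommendations:

```
# Mount LTFS at /mnt/ltfs, use at most 2 drives, at most 3 tape volumes per drive
ltfs = /mnt/ltfs 2 3
```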
Tell the SAM-QFS software to reread the defaults.conf
file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.
[samfs-mds]root@solaris:~# /opt/SUNWsamfs/sbin/samd config
If you stopped SAM-QFS operations in an earlier step, restart them now using the samd start
command.
[samfs-mds]root@solaris:~# samd start
SAM-QFS support for LTFS is now enabled. If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
This completes basic installation and configuration of SAM-QFS file systems. At this point, you have set up fully functional file systems that are optimally configured for a wide range of purposes.
The remaining chapters in this book address more specialized needs. So, before you embark on the additional tuning and feature implementation tasks outlined below, carefully assess your requirements. Then, if you need additional capabilities, such as high-availability or shared file-system configurations, you can judiciously implement additional features starting from the basic configurations. But if you find that the work you have done so far can meet your needs, additional changes are unlikely to be an improvement. They may simply complicate maintenance and administration.
If applications transfer unusually large or unusually uniform amounts of data to the file system, you may be able to improve file system performance by setting additional mount options. See "Tuning I/O Characteristics for Special Needs" for details.
If you need to configure shared access to the file system, see "Accessing File Systems from Multiple Hosts Using SAM-QFS Software" and/or "Accessing File Systems from Multiple Hosts Using NFS and SMB/CIFS".
If you need to configure a high-availability QFS file system or SAM-QFS archiving file system, see "Preparing High-Availability Solutions".
If you need to configure a SAM-QFS archiving file system to share archival storage hosted at a remote location, see "Configuring SAM-Remote".
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".