QFS file systems are the basic building blocks of all Oracle HSM solutions. Used on their own, they offer high performance, effectively unlimited capacity, and support for extremely large files. When used with Oracle Hierarchical Storage Manager and suitably configured archival storage, they become Oracle HSM archiving file systems. Both archiving and non-archiving QFS file systems can then form the basis of more complex, multiple-host and high-availability configurations. This chapter outlines the basic tasks involved in creating and configuring them.
Creating and configuring a basic QFS file system is straightforward. You save the file system information in a Master Configuration File (mcf) and create the corresponding file system using the /opt/SUNWsamfs/sbin/sammkfs command. Then create a mount point, add the file system's mount parameters to the host's virtual file system configuration, and mount the new file system. The process can be performed using either the graphical Oracle HSM Manager interface or a text editor and a command-line terminal. In the examples, we use the editor-and-command-line method to make the underlying process explicit and thus easier to understand.
When creating a QFS file system, proceed as follows:
Select the type of QFS file system that best meets your needs.
In most cases, a general-purpose file system that stores data and metadata on the same devices is the best choice. When file systems are large and seek time must be minimized, a high-performance file system that stores metadata and data on separate, dedicated devices may be preferable.
Configure the required file system.
See "Configure a General-Purpose ms File System" or "Configure a High-Performance ma File System".
Mount the file system.
See "Create a Mount Point for the New QFS File System" and "Add the New QFS File System to the Solaris Virtual File System Table".
Configure a General-Purpose ms File System
Log in to the file-system host as root. Log in to the global zone if the host is configured with zones.
root@qms1mds:~#
Create the file /etc/opt/SUNWsamfs/mcf.
The mcf (master configuration file) is a table of six columns separated by white space, each representing one of the parameters that define a QFS file system: Equipment Identifier, Equipment Ordinal, Equipment Type, Family Set, Device State, and Additional Parameters. The rows in the table represent file-system equipment, which includes both storage devices and groups of devices (family sets).
You can create the mcf file by selecting options in the Oracle HSM Manager graphical user interface or by using a text editor. In the example below, we use the vi text editor:
root@qms1mds:~# vi /etc/opt/SUNWsamfs/mcf
~
~
"/etc/opt/SUNWsamfs/mcf" [New File]
For the sake of clarity, enter column headings as comments. Comment rows start with a hash sign (#):
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
In the Equipment Identifier field (the first column) of the first row, enter the name of the new file system. The equipment identifier can contain up to 31 characters, must start with an alphabetic character, and can contain only letters, numbers, and/or underscore (_) characters. In this example, the file system is named qms1:
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1
In the Equipment Ordinal field (the second column), enter a number that will uniquely identify the file system. The equipment ordinal number uniquely identifies all equipment controlled by Oracle HSM. In this example, we use 100 for the qms1 file system:
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100
In the Equipment Type field (the third column), enter the equipment type for a general-purpose QFS file system, ms:
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100       ms
In the Family Set field (the fourth column), enter the name of the file system. The Family Set parameter defines a group of equipment that are configured together to form a unit, such as a robotic tape library and its resident tape drives or a file system and its component disk devices. The family set name must have the same value as the equipment identifier. So, in the example, we name the family set qms1:
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100       ms        qms1
Enter on in the Device State column, and leave the Additional Parameters column blank. This row is complete:
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100       ms        qms1      on
Start a new row. Enter the identifier for one of the disk devices that you selected in the Equipment Identifier field (the first column), and enter a unique number in the Equipment Ordinal field (the second column). In the example, we indent the device line to emphasize the fact that the device is part of the qms1 file-system family set and increment the equipment number of the family set to create the device number, in this case 101:
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100       ms        qms1      on
 /dev/dsk/c1t3d0s3  101
In the Equipment Type field of the disk device row (the third column), enter the equipment type for a disk device, md. For more information on device identifiers, see the "Glossary of Equipment Types".
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100       ms        qms1      on
 /dev/dsk/c1t3d0s3  101       md
Enter the family set name in the Family Set field of the disk device row (the fourth column), enter on in the Device State field (the fifth column), and leave the Additional Parameters field (the sixth column) blank. The family set name qms1 identifies the disk equipment as part of the hardware for the qms1 file system.
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100       ms        qms1      on
 /dev/dsk/c1t3d0s3  101       md        qms1      on
Now add entries for any remaining disk devices, save the file, and quit the editor.
# Equipment         Equipment Equipment Family    Device Additional
# Identifier        Ordinal   Type      Set       State  Parameters
#-----------------  --------- --------- --------- ------ ----------------
qms1                100       ms        qms1      on
 /dev/dsk/c1t3d0s3  101       md        qms1      on
 /dev/dsk/c1t4d0s5  102       md        qms1      on
:wq
root@qms1mds:~#
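The layout rules above (at least four whitespace-separated fields per row, one family set name shared by the file-system row and its device rows) can be spot-checked before running sam-fsd. The script below is a hypothetical convenience, not an Oracle HSM tool, and only catches gross column mistakes:

```shell
# Hypothetical mcf pre-check (not an Oracle HSM tool). Verifies that
# every non-comment row has at least four fields and that device rows
# carry the same family set name as the file-system row.
mcf=$(mktemp)
cat > "$mcf" <<'EOF'
qms1                100       ms        qms1      on
 /dev/dsk/c1t3d0s3  101       md        qms1      on
 /dev/dsk/c1t4d0s5  102       md        qms1      on
EOF
awk 'NF == 0 || $1 ~ /^#/ { next }                 # skip blanks, comments
     NF < 4   { print "line " NR ": too few fields"; bad = 1; next }
     fs == "" { fs = $4; next }                    # family set from row 1
     $4 != fs { print "line " NR ": family set " $4 " != " fs; bad = 1 }
     END      { exit bad }' "$mcf" && echo "mcf looks consistent"
rm -f "$mcf"
```

A check like this is no substitute for sam-fsd, which validates device types and ordinals as well, but it catches the most common editing slips early.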
Check the mcf file for errors by running the sam-fsd command. The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error:
root@qms1mds:~# sam-fsd
If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step. In the example below, sam-fsd reports an unspecified problem with a device:
root@qms1mds:~# sam-fsd
Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem qms1
sam-fsd: Problem with file system devices.
Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed a letter o instead of a 0 in the slice number part of the equipment name for device 102, the second md device:
qms1                100       ms        qms1      on
 /dev/dsk/c0t0d0s0  101       md        qms1      on
 /dev/dsk/c0t3d0so  102       md        qms1      on
If the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step. The example is a partial listing of error-free output:
root@qms1mds:~# sam-fsd
Trace file controls:
sam-amld       /var/opt/SUNWsamfs/trace/sam-amld
               cust err fatal ipc misc proc date
               size    10M  age 0
sam-archiverd  /var/opt/SUNWsamfs/trace/sam-archiverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
sam-catserverd /var/opt/SUNWsamfs/trace/sam-catserverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
Tell the Oracle HSM software to reread the mcf file and reconfigure itself accordingly. Use the command samd config.
root@qms1mds:~# samd config
Configuring SAM-FS
root@qms1mds:~#
If the command samd config fails with the message You need to run /opt/SUNWsamfs/util/SAM-QFS-post-install, you forgot to run the post-installation script when you installed the software. Run it now.
root@qms1mds:~# /opt/SUNWsamfs/util/SAM-QFS-post-install
- The administrator commands will be executable by root only (group bin).
  If this is the desired value, enter "y". If you want to change the
  specified value enter "c".
...
root@qms1mds:~#
Create the file system using the /opt/SUNWsamfs/sbin/sammkfs command and the family set name of the file system. The Oracle HSM software uses dual allocation and default Disk Allocation Unit (DAU) sizes for md devices. This is a good choice for a general-purpose file system, because it can accommodate both large and small files and I/O requests. In the example, we accept the defaults:
root@qms1mds:~# sammkfs qms1
Building 'qms1' will destroy the contents of devices:
  /dev/dsk/c1t3d0s3
  /dev/dsk/c1t4d0s5
Do you wish to continue? [y/N]yes
total data kilobytes = ...
If we needed to specify a non-default DAU size that better met our I/O requirements, we could use the -a option to specify the number of 1024-byte blocks in the desired DAU (for additional information, see the sammkfs(1m) man page):
root@qms1mds:~# sammkfs -a 16 qms1
If you are using Oracle Hierarchical Storage Manager to set up an archiving file system, go to "Configuring Oracle HSM Archiving File Systems" now.
Otherwise, create a mount point for the new file system.
Configure a High-Performance ma File System
Once the Oracle HSM software is installed on the file-system host, you configure an ma file system as described below.
Log in to the file system host as root. Log in to the global zone if the host is configured with zones.
root@qma1mds:~#
Select the disk devices that will hold the metadata.
Select the disk devices that will hold the data.
Create the mcf file. You can create the mcf file by selecting options in the Oracle HSM Manager graphical user interface or by using a text editor. In the example below, we use the vi text editor:
root@qma1mds:~# vi /etc/opt/SUNWsamfs/mcf
~
"/etc/opt/SUNWsamfs/mcf" [New File]
For the sake of clarity, enter column headings as comments. Comment rows start with a hash sign (#):
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ -----------------
Create an entry for the file-system family set. In this example, we identify the file system as qma1, increment the equipment ordinal to 200, set the equipment type to ma, set the family set name to qma1, and set the device state to on:
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ -----------------
qma1                200       ma        qma1   on
Add an entry for each metadata device. Enter the identifier for the disk device you selected in the equipment identifier column, set the equipment ordinal, and set the equipment type to mm. Add enough metadata devices to hold the metadata required for the size of the file system. In the example, we add a single metadata device:
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ -----------------
qma1                200       ma        qma1   on
 /dev/dsk/c0t0d0s0  201       mm        qma1   on
Now add entries for the data devices, save the file, and quit the editor. These can be md, mr, or striped-group (gXXX) devices. For this example, we specify md devices:
# Equipment         Equipment Equipment Family Device Additional
# Identifier        Ordinal   Type      Set    State  Parameters
#------------------ --------- --------- ------ ------ -----------------
qma1                200       ma        qma1   on
 /dev/dsk/c0t0d0s0  201       mm        qma1   on
 /dev/dsk/c0t3d0s0  202       md        qma1   on
 /dev/dsk/c0t3d0s1  203       md        qma1   on
:wq
root@qma1mds:~#
Check the mcf file for errors by running the sam-fsd command. The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error:
root@qma1mds:~# sam-fsd
If the sam-fsd command finds an error in the mcf file, edit the file to correct the error and recheck as described in the preceding step. In the example below, sam-fsd reports an unspecified problem with a device:
root@qma1mds:~# sam-fsd
Problem in mcf file /etc/opt/SUNWsamfs/mcf for filesystem qma1
sam-fsd: Problem with file system devices.
Usually, such errors are the result of inadvertent typing mistakes. Here, when we open the mcf file in an editor, we find that we have typed an exclamation point (!) instead of a 1 in the slice number part of the equipment name for device 202, the first md device:
qma1                200       ma        qma1   on
 /dev/dsk/c0t0d0s0  201       mm        qma1   on
 /dev/dsk/c0t0d0s!  202       md        qma1   on
 /dev/dsk/c0t3d0s0  203       md        qma1   on
If the sam-fsd command runs without error, the mcf file is correct. Proceed to the next step. The example is a partial listing of error-free output:
root@qma1mds:~# sam-fsd
Trace file controls:
sam-amld       /var/opt/SUNWsamfs/trace/sam-amld
               cust err fatal ipc misc proc date
               size    10M  age 0
sam-archiverd  /var/opt/SUNWsamfs/trace/sam-archiverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
sam-catserverd /var/opt/SUNWsamfs/trace/sam-catserverd
               cust err fatal ipc misc proc date module
               size    10M  age 0
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
Create the file system using the /opt/SUNWsamfs/sbin/sammkfs command and the family set name of the file system. In the example, we create the file system using the default Disk Allocation Unit (DAU) size for ma file systems with md devices, 64 kilobytes:
root@qma1mds:~# sammkfs qma1
Building 'qma1' will destroy the contents of devices:
  /dev/dsk/c0t0d0s0
  /dev/dsk/c0t3d0s0
  /dev/dsk/c0t3d0s1
Do you wish to continue? [y/N]yes
total data kilobytes = ...
The default is a good, general-purpose choice. But if the file system were to primarily support smaller files or applications that read and write smaller amounts of data, we could also specify a DAU size of 16 or 32 kilobytes. To specify a 16-kilobyte DAU, we would use the sammkfs command with the -a option:
root@qma1mds:~# sammkfs -a 16 qma1
The DAU for mr devices and gXXX striped groups is fully adjustable within the range 8-65528 kilobytes, in increments of 8 kilobytes. The default is 64 kilobytes for mr devices and 256 kilobytes for gXXX striped groups. See the sammkfs man page for additional details.
If you are using Oracle Hierarchical Storage Manager to set up an archiving file system, go to "Configuring Oracle HSM Archiving File Systems" now.
Otherwise, create a mount point for the new file system.
Create a Mount Point for the New QFS File System
If you are not currently logged in, log in to the file system host as root. Log in to the global zone if the host is configured with zones.
root@qmx1mds:~#
Create a mount-point directory for the new file system.
root@qmx1mds:~# mkdir /hsm/qmx1
root@qmx1mds:~#
Set the access permissions for the mount point. Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we set permissions on the /hsm/qmx1 mount-point directory to 755 (drwxr-xr-x):
root@qmx1mds:~# chmod 755 /hsm/qmx1
root@qmx1mds:~#
Add the New QFS File System to the Solaris Virtual File System Table
If you are not currently logged in, log in to the file system host as root. Log in to the global zone if the host is configured with zones.
root@qmx1mds:~#
Back up the operating system's /etc/vfstab file.
root@qmx1mds:~# cp /etc/vfstab /etc/vfstab.backup
Open the /etc/vfstab file in a text editor, start a new line, and enter the name of the file system in the first column, under the heading File Device to Mount.
root@qmx1mds:~# vi /etc/vfstab
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1
In the second column, under the heading Device to fsck, enter a hyphen (-).
root@qmx1mds:~# vi /etc/vfstab
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1       -
In the third column, under the heading Mount Point, enter the path to the mount point that you created for the file system.
root@qmx1mds:~# vi /etc/vfstab
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1       -        /hsm/qmx1
In the fourth column, under the heading System Type, enter the file system type, samfs.
root@qmx1mds:~# vi /etc/vfstab
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1       -        /hsm/qmx1  samfs
In the fifth column, under the heading fsck Pass, enter a hyphen (-).
root@qmx1mds:~# vi /etc/vfstab
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1       -        /hsm/qmx1  samfs   -
In the sixth column, under the heading Mount at Boot, enter no.
root@qmx1mds:~# vi /etc/vfstab
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1       -        /hsm/qmx1  samfs   -     no
To specify round-robin allocation, add the stripe=0 mount option.
Setting mount options in /etc/vfstab is usually simplest during initial file system configuration. But you can also set most options in an optional /etc/opt/SUNWsamfs/samfs.cmd file or from the command line. See the samfs.cmd(4) and mount_samfs(1m) man pages for details.
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1       -        /hsm/qmx1  samfs   -     no       stripe=0
To specify striped allocation, add the stripe=stripe-width mount option, where stripe-width is an integer in the range [1-255] that represents the number of Disk Allocation Units (DAUs) that should be written to each disk in the stripe.
When striped allocation is specified, data is written to devices in parallel. So, for best performance, choose a stripe width that fully utilizes the bandwidth available with your storage hardware. Note that the volume of data transferred for a given stripe width depends on how hardware is configured. For md devices implemented on single disk volumes, a stripe width of 1 writes one 64-kilobyte DAU to each of two disks for a total of 128 kilobytes. For md devices implemented on 3+1 RAID 5 volume groups, the same stripe width transfers one 64-kilobyte DAU to each of the three data disks on each of two devices, for a total of six DAUs or 384 kilobytes per transfer. In our example, we set the stripe width to one DAU:
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
...
qmx1       -        /hsm/qmx1  samfs   -     no       stripe=1
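The transfer-size arithmetic above reduces to a single multiplication. The function and parameter names below are illustrative only (this is not an Oracle HSM command); the data-disk count reflects the hardware, 1 for plain disks and 3 for 3+1 RAID 5 groups, and md devices are allocated in pairs:

```shell
# Back-of-the-envelope version of the arithmetic above: kilobytes
# moved per write = stripe width x DAU x data disks per device x
# number of md devices written together.
transfer_kb() {
  echo $(( $1 * $2 * $3 * $4 ))   # stripe dau_kb data_disks devices
}
transfer_kb 1 64 1 2   # single-disk volumes: prints 128
transfer_kb 1 64 3 2   # 3+1 RAID 5 groups:   prints 384
```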
You can try adjusting the stripe width to make better use of the available hardware. In the Mount Options field for the file system, set the stripe=n mount option, where n is a multiple of the DAU size specified for the file system. Test the I/O performance of the file system and readjust the setting as needed.
When you set stripe=0, Oracle HSM writes files to devices using round-robin allocation. Each file is completely allocated on one device until that device is full. Round-robin is preferred for shared file systems and multistream environments.
In the example, we have determined that the bandwidth of our RAID-5 volume groups is under-utilized with a stripe width of one, so we try stripe=2:
#File
#Device    Device   Mount      System  fsck  Mount    Mount
#to Mount  to fsck  Point      Type    Pass  at Boot  Options
#--------  -------  ---------  ------  ----  -------  -------------------------
/devices   -        /devices   devfs   -     no       -
/proc      -        /proc      proc    -     no       -
...
qmx1       -        /hsm/qmx1  samfs   -     no       ...,stripe=2
When you are satisfied with the mount options, save the vfstab file.
...
qmx1       -        /hsm/qmx1  samfs   -     no       stripe=1
:wq
root@qmx1mds:~#
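Before mounting, the finished entry can be sanity-checked for the full seven vfstab fields. The check below is a hypothetical convenience, not a Solaris utility:

```shell
# Hypothetical check (not a Solaris tool): the completed entry should
# have all seven vfstab fields, with samfs in the fourth column.
entry='qmx1 - /hsm/qmx1 samfs - no stripe=1'
set -- $entry
if [ $# -eq 7 ] && [ "$4" = samfs ]; then
  echo "vfstab entry looks complete"
else
  echo "vfstab entry is malformed" >&2
fi
```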
Mount the new file system using the Solaris mount command.
root@qmx1mds:~# mount /hsm/qmx1
The basic file system is now complete and ready to use.
If you are using Oracle Hierarchical Storage Manager to set up an archiving file system, see "Configuring Oracle HSM Archiving File Systems".
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Archiving file systems combine one or more QFS ma- or ms-type file systems with archival storage and Oracle Hierarchical Storage Manager software. When users and applications create or modify files stored in the primary file-system disk cache, the Oracle HSM software automatically archives a specified number of file copies on specified archival storage media. Archival media can include magnetic or solid-state disk devices, removable magnetic tapes or optical disks, and/or Oracle Storage Cloud containers.
The Oracle HSM software integrates archival storage into basic file-system operations. So little-used, primary cache-resident files can be released to free primary disk space without deleting any data from the file system as a whole. Files can be maintained in multiple copies on varied media for maximum redundancy and continuous data protection.
To configure an archiving file system, carry out the tasks below:
Oracle HSM archiving file systems can copy files from the primary file-system disk cache to any of three types of archival media: solid-state or magnetic disk, magnetic tape stored in a robotic library, or cloud storage. To configure Oracle HSM to use any of these types of storage, perform the corresponding task listed below:
If you have not already done so, allocate the devices and configure the file systems for the archival disk volumes. See "Configure Archival Disk Storage".
Create the /etc/opt/SUNWsamfs/diskvols.conf file in a text editor, and assign a volume serial number (VSN) to each file system. For each file system, start a new line consisting of the desired volume serial number, white space, and the path to the file-system mount point. Then save the file.
In the example, we have configured the host to use fifteen NFS file systems as disk-based archival volumes, DISKVOL1 to DISKVOL15. All are mounted in the /diskvols directory:
root@mds1:~# vi /etc/opt/SUNWsamfs/diskvols.conf
# Volume
# Serial     Resource
# Number     Path
# ------     ---------------------
DISKVOL1     /diskvols/DISKVOL1
DISKVOL2     /diskvols/DISKVOL2
...
DISKVOL15    /diskvols/DISKVOL15
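Rather than typing fifteen nearly identical rows, the example entries could be generated with a short loop. The VSN and mount-point names below are the ones from the example and would need adjusting for a real site; we write to a draft file rather than directly to /etc/opt/SUNWsamfs/diskvols.conf:

```shell
# Generate the fifteen example diskvols.conf rows instead of typing
# them by hand. VSN and path names come from the example above.
draft=$(mktemp)
i=1
while [ "$i" -le 15 ]; do
  printf '%-10s /diskvols/DISKVOL%d\n' "DISKVOL$i" "$i"
  i=$(( i + 1 ))
done > "$draft"
wc -l < "$draft"   # 15 rows
rm -f "$draft"
```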
If you plan to use Oracle Storage Cloud services as Oracle HSM logical media, configure Oracle HSM for the services now.
Otherwise, configure network-attached robotic libraries, removable media, and drives.
Cloud libraries (equipment type cr) and cloud media volumes (media type cl) are the Oracle HSM interface to public and private storage clouds. Storage clouds are abstract, network services that provide an agreed level of service rather than a set of defined physical resources. Oracle HSM cloud libraries and media let you use cloud resources in the same way that you use a removable media library.
For each cloud library that you plan to use with Oracle HSM, carry out the following tasks:
If you have not done so, carry out the required preliminary configuration steps, as described in "Enable Oracle HSM Cloud Libraries".
When you initially configure cloud storage accounts and, periodically, thereafter, you must supply the software with current authentication passwords. The software stores them securely in encrypted form. For each account that you intend to use with Oracle HSM, proceed as follows:
Before proceeding further, make sure that you have the password for the cloud user account that you have provided for the Oracle HSM software.
If you are using Oracle Storage Cloud, this will be the Storage_Administrator account password that you assigned. See "Create Oracle Storage Cloud Service User Accounts for Oracle HSM". Otherwise, your OpenStack Swift administrator should have provided this password.
Log in to the Oracle HSM metadata server host as the user that manages the cloud storage. The cloud storage management user can be root or a less privileged account configured on the Oracle HSM metadata server host. In the example, the host name is mds1. We log in as user root:
root@mds1:~#
Create a password file to hold the password for the cloud user account. Use the command sam-cloudd -p path/filename, where:
path is the absolute path to the directory where you intend to store the password.
filename is the name of the file that is to hold the password.
The command prompts you for the password that the file will store.
In the example, we create the file /etc/opt/SUNWsamfs/ocld1auth:
root@mds1:~# sam-cloudd -p /etc/opt/SUNWsamfs/ocld1auth
Password:
At the prompt, enter the password for the Oracle Storage Cloud Storage_Administrator user account, and confirm it when prompted.
The sam-cloudd -p path/filename command encrypts the password and stores the result in the specified file. In the example, the string P^ssw0rd represents the account password:
root@mds1:~# sam-cloudd -p /etc/opt/SUNWsamfs/ocld1auth
Password: P^ssw0rd
Password: P^ssw0rd
root@mds1:~#
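Since the file created by sam-cloudd -p guards an account password, it is worth confirming that it is readable by its owner only. The check below is an illustrative sketch using standard commands, not an Oracle HSM feature; it uses a temporary file as a stand-in for /etc/opt/SUNWsamfs/ocld1auth so it can run anywhere:

```shell
# Illustrative only: confirm a password file is owner-read/write only.
f=$(mktemp)
chmod 600 "$f"
case $(ls -l "$f") in
  -rw-------*) echo "password file is owner-only" ;;
  *)           echo "loose permissions detected: fix with chmod 600" >&2 ;;
esac
rm -f "$f"
```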
If you are warned that the specified file already exists, confirm that the file can be overwritten.
The cloud library parameters file configures storage cloud resources as a virtual, network-attached, automated library of removable media. For each cloud library that you need to configure, proceed as follows:
Log in to the file system metadata server host as root. In the example, the host name is mds1:
root@mds1:~#
Make sure that you have the user name for Oracle HSM's administrative user account and the domain ID of the domain to which it belongs.
You made a note of this information when carrying out the instructions in "Provide Storage Cloud Resources".
Choose a family set name for the cloud library.
The family set name of a cloud library is also the name of the parameters file that defines the library, the value of the name parameter in the parameters file, the prefix for the volume serial numbers (VSNs) assigned to cloud media volumes, and the equipment identifier for the library in the master configuration file (mcf
).
In the examples, the family set name simply combines the abbreviation cl (cloud library) with the mcf equipment number that we plan to use for the library: cl800.
Create the parameters file for the cloud library. In a text editor, open the file /etc/opt/SUNWsamfs/file-name, where file-name is the family set name that you are assigning to the cloud resource.
In the example, we use the vi editor to create the file /etc/opt/SUNWsamfs/cl800 and add a descriptive comment:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
Specify the type of cloud container that this cloud library will use. Enter a line of the form type = container_type, where container_type is one of the following:
oracle-archive specifies Oracle Storage Cloud archive-storage containers.
oracle-object specifies Oracle Storage Cloud object-storage containers.
swift-object specifies standard OpenStack Swift object-storage containers.
See "Create Oracle Storage Cloud Service User Accounts for Oracle HSM" for advice on selecting Oracle Storage Cloud containers.
In the first example, we specify oracle-archive containers:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
In the second example, we specify swift-object containers:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl810
# Oracle Storage Cloud parameters file for library cl810
type = swift-object
Start a new line for the storage cloud Uniform Resource Locator (URL). Enter url = https://cloud-service-URL, where cloud-service-URL is the URL assigned by your cloud service (Oracle Storage Cloud or an OpenStack private cloud).
If you are using Oracle Storage Cloud, cloud-service-URL takes the form service_name-identity_domain_id.storage.oraclecloud.com, where:
service_name is the name of the service that provides the storage for this cloud library.
identity_domain_id is the Oracle-assigned identifier for the identity domain, the authorization and authentication domain for your part of the multiple-tenancy cloud environment.
In the first example, we are using Oracle Storage Cloud. We consult our notes in the hsm-cloud-info.txt file for this information. The service name is example1234 and the identity domain ID is usexamplecom49808:
root@mds1:~# cat hsm-cloud-info.txt
service name:       example1234
identity domain id: usexamplecom49808
user name:          hsmlibrary800
In the /etc/opt/SUNWsamfs/cl800 file, we enter the URL shown:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
In the second example, we are using an OpenStack private cloud. We enter the URL supplied by the Swift administrator, https://ohsmcl810.cloud.example.com:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl810
# Oracle Storage Cloud parameters file for library cl810
type = swift-object
url = https://ohsmcl810.cloud.example.com
Add a new line of the form domain_id = domain_id, where domain_id is the administrative domain that contains the Oracle HSM user and resources.
If you are using Oracle Storage Cloud, domain_id takes the form service_name-identity_domain_id, where:
service_name is the name of the service that provides the storage for this cloud library.
identity_domain_id is the Oracle-assigned identifier for the authorization and authentication domain for your part of the multiple-tenancy cloud environment.
In the first example, we are using Oracle Storage Cloud. So the domain_id is example1234-usexamplecom49808:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
In the second example, we are using an OpenStack private cloud. So the domain_id is the string supplied by the administrator, ohsm:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl810
# Oracle Storage Cloud parameters file for library cl810
type = swift-object
url = https://ohsmcl810.cloud.example.com
domain_id = ohsm
Supply the user name for the Oracle Storage Cloud Storage_Administrator
account that you created for the use of the Oracle HSM software. Enter a line of the form username
=
name
, where name
is the user name.
In the first example, we are using Oracle Storage Cloud. So the username is the name that we configured under the Storage_Administrator
role, hsmlibrary800
(see "Create Oracle Storage Cloud Service User Accounts for Oracle HSM"):
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
In the second example, we are using an OpenStack private cloud. So the username
is the value supplied by the administrator, ohsmcl810
:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl810
# Oracle Storage Cloud parameters file for library cl810
type = swift-object
url = https://ohsmcl810.cloud.example.com
domain_id = ohsm
username = ohsmcl810
On a new line, enter the path to the password file that will authenticate Oracle HSM as the authorized user of the cloud storage account. Enter a line of the form password_file
=
path/file
, where:
path
is the absolute path to the password file that you created with the sam-cloudd
-p
command.
file
is the name of the file that contains the encrypted account password.
In the example, the password file is /etc/opt/SUNWsamfs/
ocld1auth
:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
password_file = /etc/opt/SUNWsamfs/ocld1auth
Specify a volume serial number (VSN) prefix that will uniquely identify virtual volumes created by this cloud service configuration. Enter a line of the form name
=
string
, where string
is the prefix that you wish to use.
Prefixes consist of 4 to 20 letters and numerals. In the example, we use the family set name of the library, cl800
, as the prefix:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
password_file = /etc/opt/SUNWsamfs/ocld1auth
name = cl800
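Before writing the prefix into the parameters file, you may want to check a candidate against the 4-to-20-letters-and-numerals rule. A hypothetical shell helper, not part of Oracle HSM:

```shell
# Return success only when the candidate prefix is 4-20 characters long
# and consists solely of letters and numerals.
valid_vsn_prefix() {
  case "$1" in
    ""|*[!A-Za-z0-9]*) return 1 ;;   # empty, or contains a non-alphanumeric
  esac
  [ "${#1}" -ge 4 ] && [ "${#1}" -le 20 ]
}

valid_vsn_prefix cl800 && echo "cl800: ok"
valid_vsn_prefix ab || echo "ab: rejected (too short)"
```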
Specify the number of logical drives that the cloud service will make available to Oracle HSM. Enter a line of the form drives
=
number
, where number
is an integer in the range [1-4
].
The drives
parameter sets the number of concurrent archiver/stager requests that the cloud service supports. The default is 4
. In the example, we do not need the full, default bandwidth, so we specify 2
to reduce memory usage:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
password_file = /etc/opt/SUNWsamfs/ocld1auth
name = cl800
drives = 2
Save the file, and close the editor.
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
password_file = /etc/opt/SUNWsamfs/ocld1auth
name = cl800
drives = 2
:wq
root@mds1:~#
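As a sanity check before proceeding, you can confirm that the finished parameters file defines every required key. A shell sketch, not an Oracle HSM tool; the /tmp copy and its contents mirror the example above:

```shell
# Stand-in for /etc/opt/SUNWsamfs/cl800; contents reproduce the example.
PARAMS=/tmp/cl800.check
cat > "$PARAMS" <<'EOF'
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
password_file = /etc/opt/SUNWsamfs/ocld1auth
name = cl800
drives = 2
EOF

# Report any required key that is missing from the file.
for key in type url domain_id username password_file name; do
  grep -q "^${key}[[:space:]]*=" "$PARAMS" || echo "missing: $key"
done
echo "check complete"
```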
If you need to encrypt data before sending it to the cloud, configure cloud encryption now.
Otherwise, configure network-attached robotic libraries, removable media, and drives.
If you need to encrypt data before sending it to the cloud, carry out the following tasks:
Generate a cloud encryption password file to hold the password to the encryption keystore.
Add encryption parameters to the cloud library parameter file.
If you use Oracle Key Manager (OKM) or Oracle Key Vault (OKV) to manage and secure your keystore, make sure that you have the password for accessing the keystore.
Log in to the file system metadata server host as the user that manages the cloud storage.
The cloud storage management user can be root or a less privileged account configured on the Oracle HSM metadata server host. In the example, the host name is mds1
. We log in as user root
:
root@mds1:~#
Enter the name and path of a password file for the cloud storage account. Use the command sam-cloudd -p path/filename
, where:
path
is the absolute path to the directory where you intend to store the password
filename
is the name of the file that is to hold the password
The command prompts you for the account password.
In the example, we create the file /etc/opt/SUNWsamfs/keystore_auth
:
root@mds1:~# sam-cloudd -p /etc/opt/SUNWsamfs/keystore_auth
Password:
At the prompt, enter the password that protects the encryption keystore. If you are warned that the specified file already exists, confirm that the file can be overwritten.
The sam-cloudd -p path/filename
command encrypts the password and stores the result in the specified file.
root@mds1:~# sam-cloudd -p /etc/opt/SUNWsamfs/keystore_auth
Password: ********
root@mds1:~#
Now add encryption parameters to the cloud library parameters file.
In the cloud library parameter file, you configure the Oracle HSM cloud encryption feature by defining the encryption keystore where cryptographic keys and certificates are to be kept and the type of key label that identifies keys within the keystore. The keystore may be either a key management application, such as Oracle Key Manager (OKM) or Oracle Key Vault (OKV), or an encrypted keystore file (KSF), as described in the keystore-file
(7) man page. Key labels let Oracle HSM request encryption or decryption of a specified cloud media volume without directly—and perhaps insecurely—accessing the actual cryptographic keys.
To add the required keystore parameters to the cloud library parameters file, proceed as follows:
If you have not already done so, log in to the file system metadata server host as root
.
In the example, the host name is mds1
:
root@mds1:~#
In a text editor, open the file /etc/opt/SUNWsamfs/cloud_library
, where cloud_library
is the family set name that you assigned to the cloud resource when you created the cloud library parameters file.
In the example, we have given the cloud library the family set name cl800
:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
name = cl800
drives = 2
In the cloud library parameters file, identify the type of keystore that you intend to use. Enter a line of the form keystore_type = type
, where type
is one of the following:
pkcs11
specifies a Public Key Cryptography Standards #11 (Cryptoki) keystore.
file
specifies an encrypted key store file that holds a label, value, and hash value for each key stored.
In the example, we use pkcs11
.
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
# Oracle Storage Cloud parameters file for library cl800
type = oracle-archive
url = https://example1234-usexamplecom49808.storage.oraclecloud.com
domain_id = example1234-usexamplecom49808
username = hsmlibrary800
name = cl800
drives = 2
keystore_type = pkcs11
Next, name the keystore implementation that you intend to use. Enter a line of the form keystore_name = name
, where name
is the PKCS #11 token name of a supported encryption provider. It can have any one of the following values:
KMS
specifies a keystore managed by Oracle Key Manager (OKM) using the Solaris PKCS #11 Key Management Service and a private protocol.
OKV
specifies a keystore managed by Oracle Key Vault using the Key Management Interoperability Protocol (KMIP).
path/file
identifies a keystore file.
In the example, we use KMS.
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
...
name = cl800
drives = 2
keystore_type = pkcs11
keystore_name = KMS
Next, identify the file that stores the password for the encrypted keystore file. Enter a line of the form keystore_password_file = path/file
, where path/file
is the path and file name for the file that you generated to hold the keystore password.
In the example, the file is /etc/opt/SUNWsamfs/keystore_auth
:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
...
name = cl800
drives = 2
keystore_type = pkcs11
keystore_name = KMS
keystore_password_file = /etc/opt/SUNWsamfs/keystore_auth
Next, define the way in which the library uses encryption keys and generates key labels. Enter a line of the form keylabel_type = dynamic|static
, where dynamic
or static
specifies the required keying and key-labeling behavior:
When dynamic
keying and key-labeling is specified, the cloud library automatically generates a unique encryption key and key label when it first encrypts each volume. It bases the key label on the Volume Serial Number (VSN) that identifies the newly encrypted volume.
When static
keying and key-labeling is specified, the cloud library uses a single encryption key and key label to encrypt all volumes. The key label must be specified by the optional keylabel_name parameter of the cloud library parameter file.
In the example, we specify dynamic keying and key-labeling for cloud library cl800
:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
...
name = cl800
drives = 2
keystore_type = pkcs11
keystore_name = KMS
keystore_password_file = /etc/opt/SUNWsamfs/keystore_auth
keylabel_type = dynamic
If you specified static keying and key-labeling behavior (keylabel_type = static
), supply the text that the cloud library should use for the key label. Enter a line of the form keylabel_name = label_text
, where label_text
is the desired text.
In the example, we have specified static keying and key-labeling for cloud library cl801
, so we set keylabel_name
to our chosen label_text
, HSMcl801Key
:
root@mds1:~# vi /etc/opt/SUNWsamfs/cl801
...
name = cl801
drives = 2
keystore_type = pkcs11
keystore_name = KMS
keystore_password_file = /etc/opt/SUNWsamfs/keystore_auth
keylabel_type = static
keylabel_name = HSMcl801Key
Save the cloud library parameters file, and close the editor.
root@mds1:~# vi /etc/opt/SUNWsamfs/cl800
...
name = cl800
drives = 2
keystore_type = pkcs11
keystore_name = KMS
keystore_password_file = /etc/opt/SUNWsamfs/keystore_auth
keylabel_type = dynamic
:wq
root@mds1:~#
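The keylabel_type rule above — static keying requires a keylabel_name entry — can be checked mechanically before restarting the software. A shell sketch, not an Oracle HSM tool; the file path and contents are illustrative:

```shell
# Stand-in for /etc/opt/SUNWsamfs/cl801; contents reproduce the static example.
PARAMS=/tmp/cl801.check
cat > "$PARAMS" <<'EOF'
keystore_type = pkcs11
keystore_name = KMS
keystore_password_file = /etc/opt/SUNWsamfs/keystore_auth
keylabel_type = static
keylabel_name = HSMcl801Key
EOF

# Extract keylabel_type, then enforce the static-requires-name rule.
ktype=$(awk -F' *= *' '$1 == "keylabel_type" {print $2}' "$PARAMS")
if [ "$ktype" = "static" ] && ! grep -q '^keylabel_name' "$PARAMS"; then
  echo "error: static keying requires keylabel_name"
else
  echo "keylabel configuration ok"
fi
```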
Next, configure network-attached robotic libraries, removable media, and drives.
Carry out the tasks listed below:
If you have not yet configured a SCSI- or SAN-attached library, go to "Configure Direct-Attached Libraries".
Check and, if necessary, adjust default drive timing values.
You can configure an Oracle StorageTek ACSLS network-attached library as described below, or you can use the Oracle HSM Manager graphical user interface to discover and configure the library automatically (for instructions, see the Oracle HSM Manager online help).
Proceed as follows:
Log in to the Oracle HSM server host as root
.
root@mds1:~#
Change to the /etc/opt/SUNWsamfs
directory.
root@mds1:~# cd /etc/opt/SUNWsamfs
In a text editor, start a new file with a name that corresponds to the type of network-attached library that you are configuring.
In the example, we start a parameters file for an Oracle StorageTek ACSLS network-attached library:
root@mds1:~# vi /etc/opt/SUNWsamfs/acsls1params
# Configuration File for an ACSLS Network-Attached Tape Library 1
Enter the parameters and values that the Oracle HSM software will use when communicating with the ACSLS-attached library.
The Oracle HSM software uses the following Oracle StorageTek Automated Cartridge System Application Programming Interface (ACSAPI) parameters to control ACSLS-managed libraries (for more information, see the stk
man page):
access=
user-id
specifies an optional user identification value for access control. By default, there is no user identification-based access control.
hostname=
hostname
specifies the hostname of the server that runs the StorageTek ACSLS interface.
portnum=
portnum
specifies the port number that is used for communication between ACSLS and Oracle HSM software.
ssihost=
hostname
specifies the hostname that identifies a multihomed Oracle HSM server to the network that connects to the ACSLS host. The default is the name of the local host.
ssi_inet_port=
ssi-inet-port
specifies the fixed firewall port that the ACSLS Server System Interface must use for incoming ACSLS responses. Specify either 0
or a value in the range [1024-65535
]. The default, 0
, allows dynamic port allocation.
csi_hostport=
csi-port
specifies the Client System Interface port number on the ACSLS server to which the Oracle HSM sends its ACSLS requests. Specify either 0
or a value in the range [1024-65535
]. The default, 0
, causes the system to query the port mapper on the ACSLS server for a port.
capid=(acs=
acsnum
,
lsm=
lsmnum
,
cap=
capnum
)
specifies the ACSLS address of a cartridge access port (CAP), where acsnum
is the Automated Cartridge System (ACS) number for the library, lsmnum
is the Library Storage Module (LSM) number for the module that holds the CAP, and capnum
is the identifying number for the desired CAP. The complete address is enclosed in parentheses.
capacity=(
index-value-list
)
specifies the capacities of removable media cartridges, where index-value-list
is a comma-delimited list of index
=
value
pairs. Each index
in the list is the index of an ACSLS-defined media type and each value
is the corresponding volume capacity in units of 1024 bytes.
The file /export/home/ACSSS/data/internal/mixed_media/media_types.dat
defines the media-type indices. In general, you only need to supply a capacity entry for new cartridge types or when you need to override the supported capacity.
device-path-name
=
(
acs=
ACSnumber
,
lsm=
LSMnumber
,
panel=
Panelnumber
,
drive=
Drivenumber
)
[
shared
]
specifies the ACSLS address of a drive that is attached to the client, where device-path-name
identifies the device on the Oracle HSM server, ACSnumber
is the Automated Cartridge System (ACS) number for the library, LSMnumber
is the Library Storage Module (LSM) number for the module that controls the drive, Panelnumber
is the identifying number for the panel where the drive is installed, and Drivenumber
is the identifying number of the drive. The complete address is enclosed in parentheses.
Adding the optional shared
keyword after the ACSLS address lets two or more Oracle HSM servers share the drive as long as each retains exclusive control over its own media. By default, a cartridge in a shared drive can be idle for 60 seconds before being unloaded.
In the example, we identify acslserver1
as the ACSLS host, limit access to sam_user
, specify dynamic port allocation, and map a cartridge access port and two drives:
root@mds1:~# vi /etc/opt/SUNWsamfs/acsls1params
# Configuration File for an ACSLS Network-Attached Tape Library 1
hostname = acslserver1
portnum = 50014
access = sam_user
ssi_inet_port = 0
csi_hostport = 0
capid = (acs=0, lsm=1, cap=0)
/dev/rmt/0cbn = (acs=0, lsm=1, panel=0, drive=1)
/dev/rmt/1cbn = (acs=0, lsm=1, panel=0, drive=2)
Save the file and close the editor.
root@mds1:~# vi /etc/opt/SUNWsamfs/acsls1params
# Configuration File for an ACSLS Network-Attached Tape Library 1
...
/dev/rmt/0cbn = (acs=0, lsm=1, panel=0, drive=1)
/dev/rmt/1cbn = (acs=0, lsm=1, panel=0, drive=2)
:wq
root@mds1:~#
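To eyeball the drive mappings against the library's physical installation order, you can pull them back out of the parameters file. A shell sketch, not an Oracle HSM tool; the /tmp path is illustrative and the contents reproduce the example:

```shell
# Stand-in for /etc/opt/SUNWsamfs/acsls1params; contents mirror the example.
PARAMS=/tmp/acsls1params.check
cat > "$PARAMS" <<'EOF'
hostname = acslserver1
portnum = 50014
access = sam_user
ssi_inet_port = 0
csi_hostport = 0
capid = (acs=0, lsm=1, cap=0)
/dev/rmt/0cbn = (acs=0, lsm=1, panel=0, drive=1)
/dev/rmt/1cbn = (acs=0, lsm=1, panel=0, drive=2)
EOF

# Print each device path with its ACSLS drive address.
drive_map=$(awk -F' = ' '$1 ~ /^\/dev\/rmt/ {print $1 " -> " $2}' "$PARAMS")
echo "$drive_map"
```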
If the library or the application software uses non-standard labels for barcoded removable media, configure labeling behavior now.
If drives or application software are known to be incompatible with Oracle HSM defaults, set drive timing values now.
Otherwise, go to "Configure the Archiving File System".
Oracle HSM identifies tape volumes using six-character, ANSI-standard labels written on the tape media itself. If the library holds a barcode reader and barcoded tape cartridges, Oracle HSM can automatically label media using the first or last six characters of the corresponding barcode. Barcodes of up to 31 characters are supported. If tape media are already labeled, Oracle HSM can use the existing labels.
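The two barcode-derived label forms can be illustrated with ordinary shell text tools (the barcode value below is made up):

```shell
# Derive the first-six and last-six character label forms from a barcode.
barcode=ABCDEF123456
first_six=$(printf '%s' "$barcode" | cut -c1-6)
last_six=$(printf '%s' "$barcode" | awk '{print substr($0, length($0) - 5)}')
echo "labels = barcodes     -> $first_six"
echo "labels = barcodes_low -> $last_six"
```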
By default, Oracle HSM automatically labels the media in the library with the first six characters of the cartridge barcode. To configure alternative behavior or restore the default, proceed as follows:
Log in to the Oracle HSM host as root
.
root@mds1:~#
If you require a non-default behavior or if you have previously overridden the default and need to reset it, open the file /etc/opt/SUNWsamfs/defaults.conf
in a text editor.
See the defaults.conf
(4) man page for additional information on this file.
In the example, we open the file in the vi
editor:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
Locate the line labels
=
, if present, or add it if it is not present.
In the example, we add the directive:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults.
...
labels =
To re-enable the default, automatic labeling based on the first six characters of the barcode, set the value of the labels
directive to barcodes
. Save the file, and close the editor.
The Oracle HSM software now automatically labels unlabeled tape media with the first six characters of the cartridge's barcode:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
labels = barcodes
:wq
root@mds1:~#
To enable automatic labeling based on the last six characters of the barcode, set the value of the labels
directive to barcodes_low
. Save the file, and close the editor.
When the labels
directive is set to barcodes_low
, the Oracle HSM software automatically labels media using the last six characters of the cartridge's barcode:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
labels = barcodes_low
:wq
root@mds1:~#
To configure Oracle HSM to read existing labels from the tape media, set the value of the labels
directive to read
. Save the file, and close the editor.
When the labels
directive is set to read
, the Oracle HSM software ignores the barcodes and uses the existing labels. It will not automatically relabel tapes.
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
...
labels = read
...
:wq
root@mds1:~#
If drives or application software are known to be incompatible with Oracle HSM defaults, set drive timing values now.
Otherwise, go to "Configure the Archiving File System".
By default, the Oracle HSM software sets drive timing parameters as follows:
The minimum time that must elapse before a specified device type can dismount media is 60
seconds.
The amount of time that Oracle HSM software waits before issuing new commands to a library that is responding to a SCSI unload
command is 15
seconds.
The amount of time that Oracle HSM software waits before unloading an idle drive is 600
seconds (10 minutes).
The amount of time that Oracle HSM software waits before unloading an idle drive that is shared by two or more Oracle HSM servers is 600
seconds (10 minutes).
To change the default timing values, proceed as follows:
If you are not logged in, log in to the Oracle HSM host as root
.
root@mds1:~#
Open the /etc/opt/SUNWsamfs/
defaults.conf
file in a text editor.
In the example, we use the vi
editor:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character) and change the value.
...
If required, specify the minimum time that must elapse before a specified device type can dismount media. In the defaults.conf
file, add a directive of the form equipment-type
_delay
=
number-of-seconds
, where equipment-type
is the two-character, Oracle HSM code that identifies the drive type that you are configuring and number-of-seconds
is an integer representing the default number of seconds for this device type.
See Appendix A, "Glossary of Equipment Types" for listings of equipment type codes and corresponding equipment. In the example, we change the unload delay for LTO drives (equipment type li
) from the default value (60
seconds) to 90
seconds:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character) and change the value.
...
li_delay = 90
If required, specify the amount of time that Oracle HSM software waits before issuing new commands to a library that is responding to a SCSI unload
command. In the defaults.conf
file, add a directive of the form equipment-type
_unload
=
number-of-seconds
, where equipment-type
is the two-character, Oracle HSM code that identifies the drive type that you are configuring and number-of-seconds
is an integer representing the number of seconds for this device type.
See Appendix A, "Glossary of Equipment Types" for listings of equipment type codes and corresponding equipment. Set the longest time that the library might need when responding to the unload
command in the worst case. In the example, we change the unload delay for LTO drives (equipment type li
) from the default value (15
seconds) to 35
seconds:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character) and change the value.
...
li_delay = 90
li_unload = 35
If required, specify the amount of time that Oracle HSM software waits before unloading an idle drive. In the defaults.conf
file, add a directive of the form idle_unload
=
number-of-seconds
, where number-of-seconds
is an integer representing the specified number of seconds.
Specify 0
to disable this feature. In the example, we disable this feature by changing the default value (600
seconds) to 0
:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character) and change the value.
...
li_delay = 90
li_unload = 35
idle_unload = 0
If required, specify the amount of time that Oracle HSM software waits before unloading a shared idle drive. In the defaults.conf
file, add a directive of the form shared_unload
=
number-of-seconds
, where number-of-seconds
is an integer representing the specified number of seconds.
You can configure Oracle HSM servers to share removable-media drives. This directive frees drives for use by other servers when the server that owns the loaded media is not actually using the drive. Specify 0
to disable this feature. In the example, we disable this feature by changing the default value (600
seconds) to 0
:
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character) and change the value.
...
li_delay = 90
li_unload = 35
idle_unload = 0
shared_unload = 0
Save the file, and close the editor.
root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character) and change the value.
...
li_delay = 90
li_unload = 35
idle_unload = 0
shared_unload = 0
:wq
root@mds1:~#
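To confirm that the overrides took the intended values, you can read them back out of the edited file. A shell sketch, not an Oracle HSM tool; the /tmp copy mirrors the example:

```shell
# Stand-in for /etc/opt/SUNWsamfs/defaults.conf; contents mirror the example.
CONF=/tmp/defaults.conf.check
cat > "$CONF" <<'EOF'
li_delay = 90
li_unload = 35
idle_unload = 0
shared_unload = 0
EOF

# Pull individual timing values back out by directive name.
li_delay=$(awk -F' *= *' '$1 == "li_delay" {print $2}' "$CONF")
idle_unload=$(awk -F' *= *' '$1 == "idle_unload" {print $2}' "$CONF")
echo "li_delay=${li_delay} idle_unload=${idle_unload}"
```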
The procedure for creating an archiving file system is identical to that for creating a non-archiving file system, except that we add devices for storing additional copies of the data files:
Start by configuring a QFS file system. You can configure either a general-purpose ms
or high-performance ma
file system.
While you can use the Oracle HSM Manager graphical user interface to create file systems, for the examples in this section, we use the vi
editor. Here, we create a general purpose, ms
file system with the family set name hqfs1
and the equipment ordinal number 100
:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
To add archival tape storage, start by adding an entry for the library. In the equipment identifier field, enter the device ID for the library and assign an equipment ordinal number.
In this example, the library equipment identifier is /dev/scsi/changer/c1t0d5
. We set the equipment ordinal number to 700
, a number in the range following the one chosen for the QFS file system:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700
Set the equipment type to rb
, a generic SCSI-attached tape library, provide a name for the tape library family set, and set the device state to on
.
In this example, we are using the library lib1
:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on
Optionally, in the Additional Parameters
column, enter the path where the library catalog will be stored.
If you do not opt to supply a catalog path, the software will set a default path for you.
Note that, due to document layout limitations, the example abbreviates the long path to the library catalog /var/opt/SUNWsamfs/catalog/lib1cat
:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on      ...catalog/lib1cat
Next, add an entry for each tape drive that is part of the library family set. Add each drive in the order in which it is physically installed in the library.
Follow the drive order listed in the drive-mapping file that you created in "Determine the Order in Which Drives are Installed in the Library".
In the example, the drives attached to Solaris at /dev/rmt/1
, /dev/rmt/0
, /dev/rmt/2
, and /dev/rmt/3
are, respectively, drives 1
, 2
, 3
, and 4
in the library. So /dev/rmt/1
is listed first in the mcf
file, as device 701
. The tp
equipment type specifies a generic SCSI-attached tape drive:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on      ...catalog/lib1cat
/dev/rmt/1cbn              701        tp         lib1    on
/dev/rmt/0cbn              702        tp         lib1    on
/dev/rmt/2cbn              703        tp         lib1    on
/dev/rmt/3cbn              704        tp         lib1    on
To add a cloud storage library, enter the path to the parameters file that defines the equipment in the Equipment
Identifier
field.
In the example, we enter the path and name of the parameters file that we created above, /etc/opt/SUNWsamfs/cl800
:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on      ...catalog/lib1cat
/dev/rmt/1cbn              701        tp         lib1    on
/dev/rmt/0cbn              702        tp         lib1    on
/dev/rmt/2cbn              703        tp         lib1    on
/dev/rmt/3cbn              704        tp         lib1    on
/etc/opt/SUNWsamfs/cl800
For each cloud storage library, enter an equipment number in the Equipment
Ordinal
field.
In the example, we assign the equipment ordinal 800
:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on      ...catalog/lib1cat
/dev/rmt/1cbn              701        tp         lib1    on
/dev/rmt/0cbn              702        tp         lib1    on
/dev/rmt/2cbn              703        tp         lib1    on
/dev/rmt/3cbn              704        tp         lib1    on
/etc/opt/SUNWsamfs/cl800   800
For each cloud storage library, enter cr
(cloud robot) in the Equipment
Type
field.
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on      ...catalog/lib1cat
/dev/rmt/1cbn              701        tp         lib1    on
/dev/rmt/0cbn              702        tp         lib1    on
/dev/rmt/2cbn              703        tp         lib1    on
/dev/rmt/3cbn              704        tp         lib1    on
/etc/opt/SUNWsamfs/cl800   800        cr
For each cloud storage library, enter the family set name that you chose when configuring the parameters file in the Family
Set
field, and enter a hyphen (-
) in both the Device
State
and Additional
Parameters
fields.
In the example, we use the family set name cl800
:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on      ...catalog/lib1cat
/dev/rmt/1cbn              701        tp         lib1    on
/dev/rmt/0cbn              702        tp         lib1    on
/dev/rmt/2cbn              703        tp         lib1    on
/dev/rmt/3cbn              704        tp         lib1    on
/etc/opt/SUNWsamfs/cl800   800        cr         cl800   -       -
Finally, if you wish to configure an Oracle HSM historian yourself, add an entry using the equipment type hy
. Enter a hyphen in the family-set and device-state columns and enter the path to the historian's catalog in the additional-parameters column.
The historian is a virtual library that catalogs volumes that have been exported from the archive. If you do not configure a historian, the software creates one automatically using the highest specified equipment ordinal number plus one.
Note that the example abbreviates the long path to the historian catalog for page-layout reasons. The full path is /var/opt/SUNWsamfs/catalog/historian_cat
:
root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment  Equipment  Family  Device  Additional
# Identifier               Ordinal    Type       Set     State   Parameters
#------------------------  ---------  ---------  ------  ------  -----------------
hqfs1                      100        ms         hqfs1   on
/dev/dsk/c1t3d0s3          101        md         hqfs1   on
/dev/dsk/c1t3d0s4          102        md         hqfs1   on
/dev/scsi/changer/c1t0d5   700        rb         lib1    on      ...catalog/lib1cat
/dev/rmt/1cbn              701        tp         lib1    on
/dev/rmt/0cbn              702        tp         lib1    on
/dev/rmt/2cbn              703        tp         lib1    on
/dev/rmt/3cbn              704        tp         lib1    on
/etc/opt/SUNWsamfs/cl800   800        cr         cl800   -       -
historian                  999        hy         -       -       .../historian_cat
Save the mcf
file, and close the editor.
...
/dev/rmt/3cbn              704        tp         lib1    on
/etc/opt/SUNWsamfs/cl800   800        cr         cl800   -       -
historian                  999        hy         -       -       .../historian_cat
:wq
root@mds1:~#
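Duplicate equipment ordinals are an easy mcf mistake that sam-fsd will flag; you can also catch them with a quick awk pass before running it. A sketch, not an Oracle HSM tool; the /tmp copy condenses the example:

```shell
# Stand-in for /etc/opt/SUNWsamfs/mcf; contents condense the example.
MCF=/tmp/mcf.check
cat > "$MCF" <<'EOF'
hqfs1                    100 ms hqfs1 on
/dev/dsk/c1t3d0s3        101 md hqfs1 on
/dev/dsk/c1t3d0s4        102 md hqfs1 on
/dev/scsi/changer/c1t0d5 700 rb lib1  on
/dev/rmt/1cbn            701 tp lib1  on
/dev/rmt/0cbn            702 tp lib1  on
/etc/opt/SUNWsamfs/cl800 800 cr cl800 -
historian                999 hy -     -
EOF

# Field 2 is the equipment ordinal; report any value used twice.
dups=$(awk '!/^#/ && NF >= 2 {if (seen[$2]++) print "duplicate ordinal: " $2}' "$MCF")
if [ -z "$dups" ]; then echo "ordinals unique"; else echo "$dups"; fi
```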
Check the mcf
file for errors by running the sam-fsd
command. Correct any errors found.
The sam-fsd
command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error:
root@mds1:~# sam-fsd
Trace file controls:
...
Would start sam-archiverd()
Would start sam-stagealld()
Would start sam-stagerd()
Would start sam-amld()
root@mds1:~#
Tell the Oracle HSM software to reread the mcf
file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.
root@mds1:~# /opt/SUNWsamfs/sbin/samd config
Configuring SAM-FS
root@mds1:~#
Log in to the file system host as root
. Log in to the global zone if the host is configured with zones.
root@mds1:~#
Create a mount-point directory for the new file system.
root@mds1:~# mkdir /hsm/hqfs1
root@mds1:~#
Set the access permissions for the mount point.
Users must have execute (x
) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/hqfs1
mount-point directory and set permissions to 755
(-rwxr-xr-x
):
root@mds1:~# mkdir /hsm/hqfs1
root@mds1:~# chmod 755 /hsm/hqfs1
root@mds1:~#
Back up the Solaris /etc/vfstab
file, and open it in a text editor.
In the example, we use the vi
editor.
root@mds1:~# cp /etc/vfstab /etc/vfstab.backup
root@mds1:~# vi /etc/vfstab
#File
#Device   Device  Mount      System fsck Mount   Mount
#to Mount to fsck Point      Type   Pass at Boot Options
#-------- ------- --------   ------ ---- ------- -----------------------
/devices  -       /devices   devfs  -    no      -
...
hqfs1     -       /hsm/hqfs1 samfs  -    yes     -
Set the high-water mark, the percentage disk cache utilization that causes Oracle HSM to release previously archived files from disk. In the last column of the Oracle HSM file-system entry, enter the mount option high=
percentage
, where percentage
is a number in the range [0-100
].
Set this value based on disk storage capacity, average file size, and an estimate of the number of files that are accessed at any given time. You want to make sure that there is always enough cache space for both new files that users create and archived files that users are currently using. But you also want to retain as many files in the cache as possible, so that you can do as little staging as possible. Handling file requests from the disk cache avoids the overhead associated with mounting removable media volumes or recalling files from an Oracle Storage Cloud service.
If the primary cache is implemented using the latest high-speed disk or solid-state devices or if you are archiving files to the Oracle Storage Cloud, set the high-water mark value at 95%. Otherwise use 80-85%. In the example, we set the high-water mark to 95%:
root@mds1:~# cp /etc/vfstab /etc/vfstab.backup
root@mds1:~# vi /etc/vfstab
#File
#Device   Device  Mount      System fsck Mount   Mount
#to Mount to fsck Point      Type   Pass at Boot Options
#-------- ------- --------   ------ ---- ------- -----------------------
/devices  -       /devices   devfs  -    no      -
...
hqfs1     -       /hsm/hqfs1 samfs  -    yes     high=95
Set the low-water mark, the percentage disk cache utilization that causes Oracle HSM to stop releasing previously archived files from disk. In the last column of the Oracle HSM file-system entry, enter the mount option low=
percentage
, where percentage
is a number in the range [0-100
].
Set this value based on disk storage capacity, average file size, and an estimate of the number of files that are accessed at any given time. You want to keep as many recently active files in cache as you can, particularly when files are frequently requested and modified and when archive copies are stored in an Oracle Storage Cloud account. This keeps staging-related overhead to a minimum. But you do not want previously cached files to consume space needed for new files and for files that have to be staged to disk.
If the primary cache is implemented using the latest high-speed disk or solid-state devices or if you are archiving files to the Oracle Storage Cloud, set the low-water mark value at 90%. Otherwise use 70-75%. In the example, we set the low-water mark to 90%:
root@mds1:~# cp /etc/vfstab /etc/vfstab.backup
root@mds1:~# vi /etc/vfstab
#File
#Device   Device  Mount      System fsck Mount   Mount
#to Mount to fsck Point      Type   Pass at Boot Options
#-------- ------- --------   ------ ---- ------- -----------------------
/devices  -       /devices   devfs  -    no      -
...
hqfs1     -       /hsm/hqfs1 samfs  -    yes     high=95,low=90
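A rough way to reason about the two marks together: the difference between high= and low= is the fraction of the cache that the releaser frees in one pass. A sketch with assumed figures, not measured values:

```shell
# Back-of-the-envelope check (all figures are assumptions):
# with high=95 and low=90, each releaser pass frees about 5% of the cache.
cache_gb=10240          # assumed 10 TB disk cache
high=95                 # high-water mark (%)
low=90                  # low-water mark (%)
freed_gb=$(( cache_gb * (high - low) / 100 ))
echo "releaser frees about ${freed_gb} GB per pass"
```

If that amount is too small to absorb a typical burst of new and staged files, widen the gap between the marks.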
If your users need to retain some file data in the disk cache when previously archived files are released from disk, enter partial releasing mount options in the last column of the Oracle HSM file-system entry.
Partial releasing lets Oracle HSM leave the first part of a designated file in the disk cache when it releases archived files to recover disk space. This approach gives applications immediate access to the data at the start of the file while the remainder stages from archival media, such as tape. The following mount options govern partial releasing:
maxpartial=
value
sets the maximum amount of file data that can remain in disk cache when a file is partially released to value
, where value
is a number of kilobytes in the range 0-2097152
(0
disables partial releasing). The default is 16
.
partial=
value
sets the default amount of file data that remains in disk cache after a file is partially released to value
, where value
is a number of kilobytes in the range [0-
maxpartial
]. The default is 16
. But note that the retained portion of a file always occupies at least one disk allocation unit (DAU) of space.
partial_stage=
value
sets the minimum amount of file data that must be read before an entire partially released file is staged to value
, where value
is a number of kilobytes in the range [0-
maxpartial
]. The default is the value specified by -o partial
, if set, or 16
.
stage_n_window=
value
sets the maximum amount of data that can be read at any one time from a file that is read directly from tape media, without automatic staging. The specified value
is a number of kilobytes in the range [64-2048000
]. The default is 256
.
For more information on files that are read directly from tape media, see the OPTIONS
section of the stage
man page under -n
.
In the example, we set maxpartial
to 128
and partial
to 64
, based on the characteristics of our application, and otherwise accept default values:
root@mds1:~# vi /etc/vfstab
#File
#Device   Device  Mount      System fsck Mount   Mount
#to Mount to fsck Point      Type   Pass at Boot Options
#-------- ------- --------   ------ ---- ------- -----------------------
/devices  -       /devices   devfs  -    no      -
...
hqfs1     -       /hsm/hqfs1 samfs  -    yes     ...maxpartial=128,partial=64
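Because the retained portion always occupies at least one disk allocation unit, the effective retained size is the larger of the partial= value and the DAU. A minimal sketch with assumed values (the real DAU is fixed when the file system is created):

```shell
# Assumed values for illustration only.
dau_kb=64               # assumed disk allocation unit
partial_kb=16           # partial= mount option (the default)
retained_kb=$(( partial_kb > dau_kb ? partial_kb : dau_kb ))
echo "retained per released file: ${retained_kb} KB"
```

Here the DAU dominates: even though partial=16, each partially released file still holds 64 kilobytes of cache.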
If you need to exclude QFS file systems from archiving, add the nosam
mount option to the /etc/vfstab
entry for each.
In the example, the nosam
option is set for the DISKVOL1
file system, which is a disk archive. Here, the nosam
mount option makes sure that archival copies are not themselves archived:
#File
#Device           Device  Mount               System fsck Mount   Mount
#to Mount         to fsck Point               Type   Pass at Boot Options
#--------         ------- ------------------  ------ ---- ------- -------
/devices          -       /devices            devfs  -    no      -
...
hqfs1             -       /hsm/hqfs1          samfs  -    yes     ...=64
DISKVOL1          -       /diskvols/DISKVOL1  samfs  -    yes     nosam
server:/DISKVOL2  -       /diskvols/DISKVOL2  nfs    -    yes
...
server:/DISKVOL15 -       /diskvols/DISKVOL15 nfs    -    yes
Save the /etc/vfstab
file, and close the editor.
...
server:/DISKVOL15 - /diskvols/DISKVOL15 nfs - yes
:wq
root@mds1:~#
Mount the Oracle HSM archiving file system.
root@mds1:~# mount /hsm/hqfs1
root@mds1:~#
Once archiving file systems have been created and mounted, you can generally address all or most of your archiving requirements with little additional configuration. In most cases, you need do little more than create a text file, archiver.cmd
, that identifies the file systems, specifies the number of archive copies made of each of your files, and assigns media volumes to each copy.
While the Oracle HSM archiving process does have a number of tunable parameters, you should generally accept the default settings in the absence of well-defined, special requirements. The defaults have been carefully chosen to minimize the number of media mounts, maximize utilization of media, and optimize end-to-end archiving performance in the widest possible range of circumstances. So if you do need to make adjustments, be particularly careful about any changes that unnecessarily restrict the archiver's freedom to schedule work and select media. Micromanaging storage operations can reduce performance and overall efficiency, sometimes drastically.
You should, however, enable archive logging in almost all situations. Archive logging is not enabled by default, because the log files can reach excessive sizes if not properly managed (management is covered in the Oracle Hierarchical Storage Manager and StorageTek QFS Software Maintenance and Administration Guide). But, if a file system is ever damaged or lost, the archive log file lets you recover files that cannot otherwise be easily restored. When you configure protection for a file system, the file-system metadata in a recovery point file lets you rapidly rebuild a file system from the data stored in archive copies. But a few files are inevitably archived before the file system is damaged or lost but after the last recovery point is generated. In this situation, the archival media holds valid copies, but, in the absence of file-system metadata, the copies cannot be automatically located. Since the file system's archive log records the volume serial numbers of the media that holds each archive copy and the position of the corresponding tar
file(s) within each volume, you can use tar
utilities to recover these files and fully restore the file system.
To create the archiver.cmd
file and configure the archiving process, proceed as follows:
Log in to the host as root
.
root@mds1:~#
Open a new /etc/opt/SUNWsamfs/archiver.cmd
file in a text editor.
Each line in an archiver.cmd
file consists of one or more fields separated by white space (leading white space is ignored).
In the example, we use the vi
editor to open the file and enter a comment:
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
# Configuration file for archiving file systems
At the beginning of the archiver.cmd
file, enter any general archiving directives that you need.
General directives contain the equals (=
) character in the second field or have no additional fields. In most cases, you can use the default values instead of setting general directives (see the GENERAL DIRECTIVES SECTION
of the archiver.cmd
man page for details).
While we could leave this section empty, in the example, we have entered the default values for two general directives to illustrate their form:
The archivemeta = off
directive tells the archiving process that it should not archive metadata.
The examine = noscan
directive tells the archiving process to check for files that need archiving whenever the file system reports that files have changed (the default).
Older versions of Oracle HSM scanned the whole file system periodically. In general, you should not change this directive unless you must do so for compatibility with legacy Oracle HSM configurations.
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
# General Directives
archivemeta = off # default
examine = noscan # default
Once you have entered all required general archiving directives, start assigning files to archive sets. On a new line, enter the assignment directive fs =
filesystem-name
, where filesystem-name
is the family set name for a file system defined in the /etc/opt/SUNWsamfs/mcf
file.
The assignment directive maps a set of files in the specified file system to a set of copies on archival media. A set of files can be as large as an entire file system or as small as a few files. But, for best performance and efficiency, you should not over-specify. Do not create more archive sets than you need to, as this can cause excessive media mounts, needless repositioning of media, and poor overall media utilization. In most cases, assign one archive set per file system.
In the example, we start the archive-set assignment directive for the archiving file system hqfs1
:
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
# General Directives
archivemeta = off # default
examine = noscan # default
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = hqfs1
On the next line, enable archive logging. Enter the logfile =
path/filename
directive, where path/filename
specifies the location and file name.
As noted above, archive log data are essential for a complete recovery following loss of a file system. So configure Oracle HSM to write the archiver log to a non-Oracle HSM directory, such as /var/adm/
, and save copies regularly. While you can create a global archiver.log
that records archiver activity for all file systems together, configuring a log for each file system makes it easier to search the log during file recovery. So, in the example, we specify /var/adm/hqfs1.archiver.log
here, with the file-system assignment directives:
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = hqfs1
logfile = /var/adm/hqfs1.archiver.log
Next, assign any files that should never be archived to the special no_archive
set.
Use a directory path and, optionally, archive set assignment parameters to identify the files that should be included in the no_archive
set. See the archiver.cmd
(4) man page for details.
In the example, the no_archive
set includes all files in the hqfs1
file system that reside on a path matching the regular expression specified by the -name
parameter. The regular expression matches temporary and backup files found under the path /hsm/hqfs1/data/
:
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = hqfs1
logfile = /var/adm/hqfs1.archiver.log
no_archive . -name \/hsm\/hqfs1\/data\/((tmp|bak)\/.*)|([.A-Za-z0-9_-]+\.tmp)
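Before committing a -name expression, you can preview which paths it captures by exercising the same regular expression with grep. This is only an illustrative check against made-up sample paths; the archiver itself evaluates the expression at run time:

```shell
# The expression is the one used in the no_archive directive above;
# the sample paths are hypothetical.
re='\/hsm\/hqfs1\/data\/((tmp|bak)\/.*)|([.A-Za-z0-9_-]+\.tmp)'
for f in /hsm/hqfs1/data/tmp/scratch.dat \
         /hsm/hqfs1/data/reports/q1.tmp \
         /hsm/hqfs1/data/reports/q1.dat
do
    if echo "$f" | grep -E "$re" > /dev/null 2>&1
    then echo "no_archive: $f"
    else echo "archive:    $f"
    fi
done
```

The first two paths match (a file under tmp/, and a *.tmp file anywhere under data/), so they would join the no_archive set; the third does not match and would be archived normally.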
On the next line, organize the files that will be archived into archive sets. For each archive set that you need to create, enter the directive archiveset-name
starting-directory
expression
, where:
archiveset-name
is the name that you choose for the new archive set.
starting-directory
is the path to the directory where Oracle HSM starts to search for the files that belong in the set (relative to the file-system mount point).
expression
is one of the Boolean expressions defined by the Solaris find
command.
You should keep archive set definitions as inclusive and simple as possible in most cases. But note that, when circumstances dictate, you can limit archive set membership by specifying additional, more restrictive qualifiers, such as user or group file ownership, file size, file date/time stamps, and file names (using regular expressions). See the archiver.cmd
man page for full information.
In the first example, we put all files found in the hqfs1
file system in a single archive set named allhqfs1
. We specify the path using a dot (.
) to start the search in the mount-point directory itself (/hsm/hqfs1
).
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = hqfs1
logfile = /var/adm/hqfs1.archiver.log
no_archive . -name \/hsm\/hqfs1\/data\/((tmp|bak)\/.*)|([.A-Za-z0-9_-]+\.tmp)
allhqfs1 .
In the second example, we define an archive set named inactive
before we define the allhqfs1
archive set. The inactive
archive set includes all files (.
) in the hqfs1
file system that have not been accessed for at least one year (-access
1y
). The Oracle HSM archiver processes directives in the order in which the archive sets are defined. So the inactive
set definition lets us restrict the allhqfs1
set to actively used files. We can then define archiving policies that store dormant files on low-cost, long term storage media, such as the Oracle Storage Cloud, and active files on disk archives and tape:
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = hqfs1
logfile = /var/adm/hqfs1.archiver.log
no_archive . -name \/hsm\/hqfs1\/data\/((tmp|bak)\/.*)|([.A-Za-z0-9_-]+\.tmp)
inactive . -access 1y
allhqfs1 .
Next, add copy directives for each archive set. For each copy, start the line with one or more spaces, and enter the directive copy-number
-release
-norelease
archive-age
unarchive-age
, where:
copy-number
is an integer.
-release
and -norelease
are optional parameters that control how disk cache space is managed once copies have been made. On its own, -release
causes the disk space to be automatically released as soon as the corresponding copy is made. On its own, -norelease
prevents release of disk space until all
copies that have -norelease
set have been made and
the releaser process has been run. Together, -release
and -norelease
automatically release disk cache space as soon as all copies that have -norelease
set have been made.
archive-age
is the time that must elapse from the time when the file was last modified before it is archived. Express time as any combination of integers and the identifiers s
(seconds), m
(minutes), h
(hours), d
(days), w
(weeks) and y
(years). The default is 4m
.
If the first copy of the archive set will be archived to a cloud service, set the archive age to 30 minutes (30m
) or more. The greater archive age allows time for changes to accumulate prior to archiving. If files are archived every time changes are made, the cloud resources accumulate large numbers of old, stale copies, increasing costs.
unarchive-age
is the time that must elapse from the time when the file was last modified before it can be unarchived. The default is to never unarchive copies.
For full redundancy, always specify at least two copies of each archive set (the maximum is four).
In the example, we specify one copy for archive set inactive
and set the archive age to one year. We specify three copies for archive set allhqfs1
and set a different archive age for each. Copy allhqfs1.
1
will be made when files are 15 minutes old. Copy allhqfs1.
2
will be made when files are 24 hours old. Copy allhqfs1.
3
will be made to tape media when files are 48 hours old.
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = hqfs1
logfile = /var/adm/hqfs1.archiver.log
no_archive . -name \/hsm\/hqfs1\/data\/((tmp|bak)\/.*)|([.A-Za-z0-9_-]+\.tmp)
inactive . -access 1y
    1 1y
allhqfs1 .
    1 15m
    2 24h
    3 48h
Define archive sets for any remaining file systems.
In the example, we have configured an additional QFS file system, DISKVOL1
, for use as archival disk media only. We do not want to make archival copies of archival copies. So we start an entry for fs
=
DISKVOL1
and include all files in the no_archive
set:
...
#-----------------------------------------------------------------------
# Archive Set Assignments
fs = hqfs1
logfile = /var/adm/hqfs1.archiver.log
no_archive . -name \/hsm\/hqfs1\/data\/((tmp|bak)\/.*)|([.A-Za-z0-9_-]+\.tmp)
inactive . -access 1y
    1 1y
allhqfs1 .
    1 15m
    2 24h
    3 48h
fs = DISKVOL1 # QFS File System (Archival Media)
no_archive .
Next we enter the directives that govern how copies are created. On a new line, start the copy parameters section by entering the key word params
.
...
fs = DISKVOL1 # QFS File System (Archival Media)
no_archive .
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
To set common copy parameters that apply to all copies of all archive sets, enter a line of the form allsets
parameter-list
where:
allsets
is the special archive set that represents all configured archive sets.
parameter-list
is a sequence of parameter/value pairs separated by spaces. Each pair takes the form -
parameter-name
value
.
See the ARCHIVE
SET
COPY
PARAMETERS
SECTION
of the archiver.cmd
(4) man page for a full list of parameter names and possible values.
The directive in the example is optimal for most file systems. The special allsets
archive set ensures that all archive sets are handled uniformly, for optimal performance and ease of management. The -sort path
parameter ensures that the tape archive (tar
) files for all copies of all archive sets are sorted by path, so that files in the same directories remain together on the archive media. The -offline_copy
stageahead
parameter can improve performance when archiving offline files. The -reserve
set
parameter ensures that files are always copied to media dedicated for the use of each archive set:
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead -reserve set
To set copy parameters specific to each copy of all archive sets, enter a line of the form allsets
.
copy-number
parameter-list
, where:
allsets
is the special archive set that represents all configured archive sets.
copy-number
is the number of the copy to which the parameters apply.
parameter-list
is a sequence of parameter/value pairs separated by spaces. Each pair takes the form -
parameter-name
value
.
See the ARCHIVE
SET
COPY
PARAMETERS
SECTION
of the archiver.cmd
(4) man page and the "Copy Parameters" section of Appendix C for full information on copy parameters and values.
The examples tailor copy jobs to archiving requirements using just a few, commonly used parameters:
The -startage
and -startsize
parameters control the start of archiving. Archiving starts when an amount of time specified by -startage
has elapsed since the earliest modification date for a file in the archive set and/or when the aggregate size of the files in the archive set exceeds the size specified by -startsize
.
The -drives
and -archmax
parameters control drive utilization. The archiver can use no more than the number of tape devices specified by -drives
and can create archive (.tar
) files no larger than the size specified by -archmax
.
In the first example, the copy parameters optimize copy allsets.1
for promptly backing up small, frequently modified user files to disk volumes. Archiving starts when the first file selected for archiving has been waiting for 15 minutes or when the total size of all waiting files is at least 500 megabytes. A maximum of 10 drives can be used to make the copy and each tar
file in the copy can be no larger than one gigabyte.
The copy parameters optimize the first tape copy, allsets.2
, for staging files that users request after the disk-cached copy has been released. Since these files are being actively used, -startage
and -startsize
ensure that modified files are always archived within 24 hours or whenever 20 gigabytes of modified files have accumulated. The -drives
parameter ensures that archiving does not monopolize the available devices at the expense of staging, while -archmax
limits the archive files to a size that is large enough to be efficiently written to tape but small enough to efficiently stage requested files.
The copy parameters optimize allsets.3
for data migration and disaster recovery. Oracle HSM starts archiving to the second tape copy when the first file selected for archiving has been waiting for 48 hours or when the total size of all waiting files is at least 50 gigabytes. Each tar
file in the copy can be no larger than 55 gigabytes. The larger maximum file size and greater start age increase the efficiency of writes to and reads from tape by eliminating the overhead associated with writing a larger number of smaller files.
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead -reserve set
allsets.1 -startage 15m -startsize 500M -drives 10 -archmax 1G
allsets.2 -startage 24h -startsize 20G -drives 2 -archmax 24G
allsets.3 -startage 48h -startsize 50G -drives 2 -archmax 55G
In the second example, copy parameters optimize allsets.1
for fast transfer to tape, using multiple drives in parallel. The -drives
parameter makes more devices available to the archiver. But one archive (.tar
) file can be written to exactly one drive. So -archmax
specifies a smaller maximum file size to insure that the copy creates enough archive files to utilize the additional drives:
allsets.1 -startage 8h -startsize 8G -drives 6 -archmax 10G
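The interplay of -archmax and -drives can be sanity-checked with simple arithmetic: each archive (.tar) file is written to a single drive, so the archiver needs at least as many archive files as drives to keep them all busy. A sketch with assumed figures:

```shell
# All figures below are assumptions for illustration only.
backlog_gb=60           # data waiting to be archived
archmax_gb=10           # -archmax: maximum archive (.tar) file size
drives=6                # -drives: devices available to the archiver
files=$(( backlog_gb / archmax_gb ))
busy=$(( files < drives ? files : drives ))
echo "archive files: ${files}, drives kept busy: ${busy}"
```

With a 60-gigabyte backlog and a 10-gigabyte -archmax, the archiver can produce six archive files, enough to drive all six devices in parallel; a larger -archmax would leave drives idle.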
To set copy parameters for individually specified archive sets, enter a line of the form archive-set
.
copy-number
parameter-list
, where:
archive-set
is the name of the archive set as specified in the File System Directives section of the file.
copy-number
is the number of the copy to which the parameters apply.
parameter-list
is a sequence of parameter/value pairs separated by spaces. Each pair takes the form -
parameter-name
value
.
See the ARCHIVE
SET
COPY
PARAMETERS
SECTION
of the archiver.cmd
(4) man page for a full list of parameter names and possible values.
In the example, the directive inactive.1
specifies one copy of the dormant files in the inactive
archive set. The copy is made once the oldest unarchived file in the set has not been modified for one year or once the total size of the files exceeds 250 megabytes. The dormant files will be stored remotely, in the Oracle Storage Cloud, so the parameter -drives
2
specifies the number of streams that will be sent to the cloud. The first copy directive for the allhqfs1
archive set, allhqfs1.1
, ensures that frequent changes to data files are backed up promptly to archival disk. The second directive, allhqfs1.2
, optimizes the archived files for staging from tape to the disk cache. The third directive, allhqfs1.3
, optimizes the archived files for disaster recovery or migration to new media:
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead -reserve set
inactive.1 -startage 1y -startsize 250M -drives 2 -archmax 1G
allhqfs1.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allhqfs1.2 -startage 24h -startsize 20G -drives 2 -archmax 24G
allhqfs1.3 -startage 48h -startsize 50G -drives 2 -archmax 55G
When you have set all required copy parameters, close the copy parameters list by entering the endparams
keyword on a new line:
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead -reserve set
inactive.1 -startage 1y -startsize 250M -drives 2 -archmax 1G
allhqfs1.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allhqfs1.2 -startage 24h -startsize 20G -drives 2 -archmax 24G
allhqfs1.3 -startage 48h -startsize 50G -drives 2 -archmax 55G
endparams
Optionally, you can define media pools by entering the vsnpools
keyword followed by one or more directives of the form pool-name
media-type
volumes
, where:
pool-name
is the name that you have assigned to the pool.
media-type
is one of the media type codes defined in Appendix A, "Glossary of Equipment Types".
volumes
is a list of volume serial numbers (VSNs) or a regular expression that matches one or more volume serial numbers.
You can substitute the name of the media pool for a range of VSNs when assigning media for a copy. If you define media pools, avoid excessively restricting the media available to the archiving process.
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# VSN Pool Definitions
vsnpools
pool1 ti TP9[0-2][0-9][0-9] TP9[5-6][0-9][0-9]
pool2 ti TP9[3-4][0-9][0-9] TP9[7-8][0-9][0-9]
Close the VSN Pool Definitions list with the endvsnpools
keyword.
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# VSN Pool Definitions
vsnpools
pool1 li VOL90[0-4]
pool2 li VOL90[5-9]
endvsnpools
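To see which volume serial numbers fall into each pool, you can test the pool expressions against candidate VSNs with grep. The anchors (^ and $) are added here for the test, on the assumption that the archiver matches expressions against complete serial numbers; the candidate VSNs are made up:

```shell
# Pool expressions from the example above; candidate VSNs are hypothetical.
for vsn in VOL900 VOL904 VOL905 VOL909 VOL910
do
    if   echo "$vsn" | grep -E '^VOL90[0-4]$' > /dev/null
    then echo "$vsn: pool1"
    elif echo "$vsn" | grep -E '^VOL90[5-9]$' > /dev/null
    then echo "$vsn: pool2"
    else echo "$vsn: in neither pool"
    fi
done
```

VOL900 through VOL904 land in pool1, VOL905 through VOL909 in pool2, and VOL910 in neither, so it would remain available for general assignment.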
Next, start identifying the archival media that your archive set copies should use. On a new line, enter the keyword vsns
:
...
#-----------------------------------------------------------------------
# VSN Directives
vsns
Specify media for each archive-set copy by entering a line of the form archive-set-name
.
copy-number
media-type
volumes
, where:
archive-set-name
.
copy-number
specifies the archive set and copy to which the directive applies.
media-type
is one of the media type codes defined in Appendix A, "Glossary of Equipment Types"
volumes
is a regular expression that matches one or more volume serial numbers (VSNs).
For full redundancy, always assign each archive set copy to a different range of media, so that both copies never reside on the same physical volume. If possible, always assign at least one copy to removable media, such as tape.
In the example, we archive files in the inactive
archive set to Oracle Storage Cloud volumes (type cl
) that have volume serial numbers starting with cl800
, the name
prefix that we assigned when setting up the cloud-storage parameters file. We send the first copy of the files in the allhqfs1
archive set to archival disk volumes (type dk
) that have serial numbers in the range DISKVOL1
to DISKVOL15
. We send the second copy of the files in allhqfs1
to tape volumes (type tp
) that have volume serial numbers in the range VOL000
to VOL399
. We send the third copy to tape volumes that have volume serial numbers in the range VOL400
to VOL899
:
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead -reserve set
allhqfs1.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allhqfs1.2 -startage 24h -startsize 20G -drives 2 -archmax 24G
allhqfs1.3 -startage 48h -startsize 50G -drives 2 -archmax 55G
endparams
#-----------------------------------------------------------------------
# VSN Directives
vsns
inactive.1 cl cl800.*
allhqfs1.1 dk DISKVOL[1-15]
allhqfs1.2 tp VOL[0-3][0-9][0-9]
allhqfs1.3 tp VOL[4-8][0-9][0-9]
When you have specified media for all archive-set copies, close the vsns
directives list by entering the endvsns
keyword on a new line. Save the file and close the editor.
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead -reserve set
allhqfs1.1 -startage 10m -startsize 500M -drives 10 -archmax 1G
allhqfs1.2 -startage 24h -startsize 20G -drives 2 -archmax 24G
allhqfs1.3 -startage 48h -startsize 50G -drives 2 -archmax 55G
endparams
#-----------------------------------------------------------------------
# VSN Directives
vsns
inactive.1 cl cl800.*
allhqfs1.1 dk DISKVOL[1-15]
allhqfs1.2 tp VOL[0-3][0-9][0-9]
allhqfs1.3 tp VOL[4-8][0-9][0-9]
endvsns
:wq
root@mds1:~#
Check the archiver.cmd
file for errors. Use the command archiver -lv
.
The archiver -lv
command prints the archiver.cmd
file to screen and generates a configuration report if no errors are found. Otherwise, it notes any errors and stops. In the example, we have an error:
root@mds1:~# archiver -lv
Reading '/etc/opt/SUNWsamfs/archiver.cmd'.
...
13: # File System Directives
14: #
15: fs = hqfs1
16: logfile = /var/adm/hqfs1.archiver.log
17: all .
18:     1 -norelease 15m
19:     2 -norelease 15m
20: fs=DISKVOL1 # QFS File System (Archival Media)
21: ...
42: endvsns
DISKVOL1.1 has no volumes defined
1 archive set has no volumes defined
root@mds1:~#
If errors were found in the archiver.cmd
file, correct them, and then re-check the file.
In the example above, we forgot to enter the no_archive
directive in the file-system directives for DISKVOL1
, the QFS file system that we configured as a disk archive. When we correct the omission, archiver
-lv
runs without errors:
root@mds1:~# archiver -lv
Reading '/etc/opt/SUNWsamfs/archiver.cmd'.
...
20: fs=DISKVOL1 # QFS File System (Archival Media)
21: no_archive .
...
42: endvsns
Notify file: /etc/opt/SUNWsamfs/scripts/archiver.sh
...
allhqfs1.1
startage: 10m startsize: 500M drives: 10 archmax: 1G
Volumes:
DISKVOL1 (/diskvols/DISKVOL1)
...
DISKVOL15 (/diskvols/DISKVOL15)
Total space available: 150T
allfiles.2
startage: 24h startsize: 20G drives: 2 archmax: 24G reserve: set
Volumes:
VOL000
...
VOL199
Total space available: 300T
allfiles.3
startage: 48h startsize: 50G drives: 2 archmax: 55G reserve: set
Volumes:
VOL200
...
VOL399
Total space available: 300T
root@mds1:~#
Tell the Oracle HSM software to reread the archiver.cmd
file and reconfigure itself accordingly. Use the samd
config
command.
root@mds1:~# /opt/SUNWsamfs/sbin/samd config
Configuring SAM-FS
root@mds1:~#
Open the /etc/opt/SUNWsamfs/releaser.cmd
file in a text editor, add the line list_size = 300000
, save the file, and close the editor.
The list_size
directive sets the number of files that can be released from a file system at one time to an integer in the range [10-2147483648
]. If there is enough space in the .inodes
file for one million inodes (allowing 512 bytes per inode), the default value is 100000
. Otherwise the default is 30000
. Increasing this number to 300000
better suits typical file systems that contain significant numbers of small files.
In the example, we use the vi
editor:
root@mds1:~# vi /etc/opt/SUNWsamfs/releaser.cmd
# releaser.cmd
logfile = /var/opt/SUNWsamfs/releaser.log
list_size = 300000
:wq
root@mds1:~#
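The default-value rule described above can be sketched as a short function. This is a generic illustration in plain shell (a hypothetical helper, not an Oracle HSM command), using the 512-bytes-per-inode figure from the text:

```shell
# Illustrate the default list_size rule described above: 100000 when
# the .inodes file has room for one million 512-byte inodes, 30000
# otherwise. (Hypothetical helper, not an Oracle HSM utility.)
default_list_size() {
    inodes_bytes=$1
    if [ "$inodes_bytes" -ge $((1000000 * 512)) ]; then
        echo 100000
    else
        echo 30000
    fi
}

default_list_size 536870912   # 512 MB of inode space: prints 100000
default_list_size 16777216    # 16 MB of inode space: prints 30000
```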
Open the /etc/opt/SUNWsamfs/stager.cmd
file in a text editor, and add the line maxactive =
stage-requests
, where stage-requests
is 500000
on hosts that have 8 gigabytes of RAM or more and 100000
on hosts that have less than 8 gigabytes. Save the file, and close the editor.
The maxactive
directive sets the maximum number of stage requests that can be active at one time to an integer in the range [1-500000
]. The default is to allow 5000 stage requests per gigabyte of host memory.
In the example, we use the vi
editor:
root@mds1:~# vi /etc/opt/SUNWsamfs/stager.cmd
# stager.cmd
logfile = /var/opt/SUNWsamfs/stager.log
maxactive = 300000
:wq
root@mds1:~#
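The default described above (5000 stage requests per gigabyte of host memory, within the valid range) can be sketched the same way. Again, this is a generic shell illustration, not an Oracle HSM command:

```shell
# Illustrate the default maxactive rule described above: 5000 stage
# requests per gigabyte of host RAM, capped at the top of the valid
# range [1-500000]. (Hypothetical helper, not an Oracle HSM utility.)
default_maxactive() {
    ram_gb=$1
    n=$((ram_gb * 5000))
    if [ "$n" -gt 500000 ]; then
        n=500000
    fi
    echo "$n"
}

default_maxactive 8     # prints 40000
default_maxactive 128   # 640000 exceeds the cap: prints 500000
```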
Recycling is not enabled by default. So, if you require recycling of removable media volumes, go to "Configuring the Recycling Process".
If the mcf
file for the archiving Oracle HSM file system includes a network-attached tape library in the archiving equipment section, go to "Catalog Archival Media Stored in a Network-Attached Tape Library".
If you need to be able to verify the data integrity of archival tape volumes, go to "Configure Archival Media Validation".
Otherwise, go to "Configure File System Protection".
When removable media volumes contain fewer than a user-specified number of valid archive sets, the recycler consolidates the valid data on other volumes so that the original volumes can be exported for long-term storage or relabeled for reuse. You can configure recycling in either of two ways:
You can configure recycling by archive set.
When you recycle media by archive set, you add recycling directives to the archiver.cmd
file. You can specify exactly how media in each archive set copy is recycled. Recycling criteria are more narrowly applied, since only members of the archive set are considered.
Where possible, recycle media by archive sets. In an Oracle HSM archiving file system, recycling is logically part of file-system operation rather than library management. Recycling complements archiving, releasing, and staging. So it makes sense to configure it as part of the archiving process.
Note that you must configure recycling by archive sets if your configuration includes disk-archive volumes and/or SAM-Remote.
You can configure recycling by library.
Recycling by library makes most sense when the logistical aspects of accessing the storage make managing the library as a unit desirable. Recycling cloud volumes is a good example.
When you recycle media by library, you add recycling directives to a recycler.cmd
file. You can thus set common recycling parameters for all media contained in a specified library. Recycling directives apply to all volumes in the library, so they are inherently less granular than archive set-specific directives. You can explicitly exclude specified volume serial numbers (VSNs) from examination. But otherwise, the recycling process simply looks for volumes that contain anything that it does not recognize as a currently valid archive file.
As a result, recycling by library can destroy files that are not part of the file system that is being recycled. If a recycling directive does not explicitly exclude them, useful data, such as backup copies of archive logs and library catalogs or archival media from other file systems, may be at risk.
For this reason, you cannot recycle by library if you are using SAM-Remote. Volumes in a library controlled by a SAM-Remote server contain foreign archive files that are owned by clients rather than by the server.
Log in to the Oracle HSM file-system host as root
.
root@mds1:~#
Open the /etc/opt/SUNWsamfs/archiver.cmd
file in a text editor, and scroll down to the copy params
section.
In the example, we use the vi
editor.
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allfiles.1 -startage 6h -startsize 6G -startcount 500000
allfiles.2 -startage 24h -startsize 20G -startcount 500000 -drives 5
In the params
section of the archiver.cmd
file, enter your recycler directives by archive set, in the form archive-set
directive-list
, where archive-set is one of the archive sets and directive-list
is a space-delimited list of directive name/value pairs (for a list of recycling directives, see the archiver.cmd
man page). Then save the file and close the editor.
In the example, we add recycling directives for archive sets allfiles.1
and allfiles.2
. The -recycle_mingain
30
and -recycle_mingain
90
directives do not recycle volumes unless, respectively, at least 30 percent and 90 percent of the volume's capacity can be recovered. The -recycle_hwm
60
directive starts recycling when 60 percent of the removable media capacity has been used.
root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
...
#-----------------------------------------------------------------------
# Copy Parameter Directives
params
allsets -sort path -offline_copy stageahead
allfiles.1 -startage 6h -startsize 6G -startcount 500000
allfiles.1 -recycle_mingain 30 -recycle_hwm 60
allfiles.2 -startage 24h -startsize 20G -startcount 500000 -drives 5
allfiles.2 -recycle_mingain 90 -recycle_hwm 60
endparams
#-----------------------------------------------------------------------
# VSN Directives
vsns
allfiles.1 dk DISKVOL1
allfiles.2 tp VOL0[0-1][0-9]
endvsns
:wq
root@mds1:~#
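The two thresholds interact: recycling activity starts once media usage passes -recycle_hwm, and a given volume is then worth recycling only if it yields at least -recycle_mingain percent of its capacity. The sketch below illustrates that decision in plain shell; it is a simplification for clarity, not the recycler's actual algorithm (which evaluates archive sets and catalogs, not two bare percentages):

```shell
# Simplified sketch of the -recycle_hwm / -recycle_mingain decision
# described above (not the recycler's actual algorithm): recycling
# begins when media usage exceeds the high-water mark, and a volume
# qualifies only when enough of its capacity can be recovered.
recycle_candidate() {
    used_pct=$1     # percentage of media capacity currently in use
    gain_pct=$2     # percentage of the volume's capacity recoverable
    hwm=$3          # -recycle_hwm value
    mingain=$4      # -recycle_mingain value
    if [ "$used_pct" -ge "$hwm" ] && [ "$gain_pct" -ge "$mingain" ]; then
        echo recycle
    else
        echo keep
    fi
}

recycle_candidate 75 40 60 30   # over the mark, enough gain: prints recycle
recycle_candidate 75 10 60 30   # too little recoverable space: prints keep
```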
Check the archiver.cmd
file for errors. Use the command archiver -lv
.
The command archiver -lv
reads the archiver.cmd file
and generates a configuration report if no errors are found. Otherwise, it notes any errors and stops. In the example, the file does not contain any errors:
root@mds1:~# archiver -lv
Reading '/etc/opt/SUNWsamfs/archiver.cmd'.
...
VOL399
Total space available: 300T
root@mds1:~#
If errors were found in the archiver.cmd
file, correct them, and then re-check the file.
Create the recycler.cmd
file in a text editor. Specify a path and file name for the recycler log. Then save the file and close the editor.
Configure Oracle HSM to write logs to a non-Oracle HSM directory, such as /var/adm/
. In the example, we use the vi
editor, and specify /var/adm/recycler.log
:
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
logfile = /var/adm/recycler.log
:wq
root@mds1:~#
Next, customize the /etc/opt/SUNWsamfs/scripts/recycler.sh
script to correctly handle recycled volumes.
Log in to the Oracle HSM file-system host as root
.
root@mds1:~#
Create the /etc/opt/SUNWsamfs/recycler.cmd
file in a text editor.
In the example, we use the vi
editor.
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
Specify a path and file name for the recycler log using the logfile
directive.
Configure Oracle HSM to write logs to a non-Oracle HSM directory, such as /var/adm/
. In the example, we specify /var/adm/recycler.log
:
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
If there are any volumes in the archival media library that must not be recycled, enter the directive no_recycle
media-type
volumes
, where media-type
is one of the media type codes defined in Appendix A, "Glossary of Equipment Types" and volumes
is a regular expression that matches one or more volume serial numbers (VSNs).
In the example, we disable recycling for tape volumes whose VSNs have a middle digit in the range [2-9] (VOL020-VOL099, VOL120-VOL199, and so on):
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
no_recycle tp VOL[0-9][2-9][0-9]
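Because a mistyped VSN expression can silently exclude the wrong volumes from recycling, it is worth checking a pattern's coverage before committing it. The sketch below does this with plain grep (not an Oracle HSM tool); note that a class-per-digit pattern like the one above matches only VSNs whose middle digit falls in [2-9]:

```shell
# Check which sample VSNs a no_recycle expression matches
# (plain grep; the pattern is the one used in recycler.cmd above).
pattern='VOL[0-9][2-9][0-9]'
for vsn in VOL019 VOL020 VOL099 VOL100 VOL150; do
    if echo "$vsn" | grep -q "^${pattern}$"; then
        echo "$vsn matched"
    else
        echo "$vsn not matched"
    fi
done
```

Running the loop shows that VOL020, VOL099, and VOL150 match, while VOL019 and VOL100 do not.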
On a new line, enter the directive library
parameters
, where library
is the family set name that the /etc/opt/SUNWsamfs/mcf
file assigns to a removable media library and where parameters
is a space-delimited list of parameter/value pairs drawn from the following list:
-dataquantity
size
sets the maximum amount of data that can be scheduled for rearchiving at one time to size
, where size
is a number of bytes. The default is 1 gigabyte.
-hwm
percent
sets the library's high-water mark
, the percentage of the total media capacity that, when used, triggers recycling. The high-water mark is specified as percent
, a number in the range [0-100
]. The default is 95
.
-ignore
prevents recycling for this library, so that you can test the recycler.cmd
file non-destructively.
-mail
address
sends recycling messages to address
, where address
is a valid email address. By default, no messages are sent.
-mingain
percent
limits recycling to volumes that can increase their available free space by at least a minimum amount, expressed as a percentage of total capacity. This minimum gain is specified as percent
, a number in the range [0-100
]. The defaults are 60
for volumes with a total capacity under 200 gigabytes and 90
for capacities of 200 gigabytes or more.
-vsncount
count
sets the maximum number of volumes that can be scheduled for rearchiving at one time to count
. The default is 1
.
In the example, we set the high-water mark for library library1
to 95% and require a minimum capacity gain per cartridge of 60%:
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
no_recycle tp VOL[0-9][2-9][0-9]
library1 -hwm 95 -mingain 60
Repeat the preceding step for any other libraries that are part of the Oracle HSM configuration.
In the example, we configure recycling of volumes in the cloud-storage library cl800
. When recycling cloud-resident volumes, we want to avoid the additional overhead and increased cost of moving still-active files from recycling candidates to other volumes. We want to recycle volumes that can be immediately relabeled and reused, without rearchiving any files. So the directive recycles the first 100 VSNs in the library (-vsncount
100
) that hold no active data. The -dataquantity
parameter recycles volumes that hold no more than one byte of data, and the -hwm
and -mingain
parameters let recycling proceed regardless of the percentage of capacity currently used or to be gained. The recycler log lists cartridges that meet these criteria as no-data VSN
volumes.
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
no_recycle tp VOL[0-9][2-9][0-9]
library1 -hwm 95 -mingain 60
cl800 -vsncount 100 -dataquantity 1b -hwm 1 -mingain 1
root@mds1:~#
Then save the recycler.cmd
file, and close the editor.
root@mds1:~# vi /etc/opt/SUNWsamfs/recycler.cmd
# Configuration file for archiving file systems
#-----------------------------------------------------------------------
logfile = /var/adm/recycler.log
no_recycle tp VOL[0-9][2-9][0-9]
library1 -hwm 95 -mingain 60
cl800 -vsncount 100 -dataquantity 1b -hwm 1 -mingain 1
:wq
root@mds1:~#
Next, customize the /etc/opt/SUNWsamfs/scripts/recycler.sh
script to correctly handle the recycled volumes.
recycler.sh
Script to Handle Recycled Media Per RequirementsWhen the recycling process identifies a removable media volume that has been drained of valid archive copies, it calls the recycler.sh
file, a C-shell script designed to handle disposition of recycled media. You edit this script to perform the tasks that you need, from notifying administrators that volumes are ready for recycling to relabeling the volumes for reuse or exporting them from the library.
By default, the script reminds the root
user to set up the script.
Log in to the Oracle HSM file-system host as root
.
root@mds1:~#
Open the file /etc/opt/SUNWsamfs/scripts/recycler.sh
in a text editor.
In the example, we use the vi
editor:
root@mds1:~# vi /etc/opt/SUNWsamfs/scripts/recycler.sh
#!/bin/csh -f
# SAM-QFS_notice_begin
...
# SAM-QFS_notice_end
Read and abide by the terms set in the license text at the head of the file.
Enable logging by uncommenting the line indicated in the script comments. If necessary, specify an alternate path for the file.
root@mds1:~# vi /etc/opt/SUNWsamfs/scripts/recycler.sh
...
# It is a good idea to log the calls to this script
echo `date` $* >> /var/opt/SUNWsamfs/recycler.sh.log
# As an example, if uncommented, the following lines will relabel the VSN,
# if it exists in a physical library. If the VSN is in the historian
To relabel recycled volumes that are resident in the library and notify root of any recycled, off-site volumes, uncomment the lines indicated in the script comments. Make changes as needed to suit local requirements.
# As an example, if uncommented, the following lines will relabel the VSN,
# if it exists in a physical library. If the VSN is in the historian
# catalog (e.g., it's been exported from a physical library and moved
# to off-site storage), then email is sent to "root" informing that the
# medium is ready to be returned to the site and reused.
#
set stat=0
if ( $6 != hy ) then
    /opt/SUNWsamfs/sbin/chmed -R $5.$2
    /opt/SUNWsamfs/sbin/chmed -W $5.$2
    if ( $1 != "od" ) then
        /opt/SUNWsamfs/sbin/${1}label -w -vsn $2 -old $2 $4\:$3
        if ( $status != 0 ) then
            set stat = 1
        endif
    else
        /opt/SUNWsamfs/sbin/${1}label -w -vsn $2 -old $2 $4\:$3\:$7
        if ( $status != 0 ) then
            set stat = 1
        endif
    endif
else
    mail root <</eof
VSN $2 of type $5 is devoid of active archive images. It is currently
in the historian catalog, which indicates that it has been exported
...
/eof
endif
echo `date` $* done >> /var/opt/SUNWsamfs/recycler.sh.log
if ( $stat != 0 ) then
    exit 1
else
    exit 0
endif
# These lines would inform "root" that the VSN should be removed
If you wish to export volumes that contain no current data files for long-term off-site storage, uncomment the lines indicated in the script comments. Make changes as needed to suit local requirements.
# These lines would inform "root" that the VSN should be removed
# from the robotic library:
mail root <</eof
VSN $2 in library $4 is ready to be shelved off-site.
/eof
echo `date` $* done >> /var/opt/SUNWsamfs/recycler.sh.log
exit 0
# The default action is to mail a message reminding you to set up this
Once you have edited the script to handle recycled cartridges, comment out the lines that generate the default reminder message.
# The default action is to mail a message reminding you to set up this
# file. You should comment out these lines (through and including the /eof
# below) after you've set up this file.
#/bin/ppriv -s I=basic -e /usr/bin/mailx -s "Robot $6 ... recycle." root <</eof
#The /etc/opt/SUNWsamfs/scripts/recycler.sh script was called by the Oracle HSM
#recycler with the following arguments:
#
#     Media type: $5($1) VSN: $2 Slot: $3 Eq: $4
#     Library: $6
#
#/etc/opt/SUNWsamfs/scripts/recycler.sh is a script which is called when the
#recycler determines that a VSN has been drained of all known active archive
...
#/eof
##echo `date` $* done >> /var/opt/SUNWsamfs/recycler.sh.log
exit 0
When you are finished editing the file, save your changes and close the editor.
#/eof
##echo `date` $* done >> /var/opt/SUNWsamfs/recycler.sh.log
exit 0
:wq
root@mds1:~#
If the mcf
file for the archiving Oracle HSM file system includes a network-attached tape library in the archiving equipment section, catalog the archival media in the library.
Otherwise, back up your configuration and configure file-system protection.
After you mount a file system, the Oracle HSM software creates catalogs for each automated library that is configured in the mcf
file. However, if you have a network-attached library, you have to take some additional steps to populate its catalog.
Proceed as follows:
Log in to the file-system host as root
.
root@mds1:~#
If the archiving file system uses an Oracle StorageTek ACSLS-attached tape library, draw the required Oracle HSM archival media from the library's scratch pool and generate the catalog automatically. Use the command samimport
-c
volumes
-s
pool
, where:
volumes
is the number of volumes needed.
pool
is the name of the scratch media pool defined for the library.
In the example, we request 20
tape volumes drawn from the pool called scratch
:
root@mds1:~# samimport -c 20 -s scratch
Once samimport
has cataloged the media in an Oracle StorageTek ACSLS-attached tape library, you are ready to configure file system protection.
If the archiving file system uses an IBM 3494 library configured as a single, unshared logical library, place the required tape volumes in the library mail slot, and let the library catalog them automatically.
If the Additional
Parameters
field of the mcf
file specifies access=private
, the library is configured as a single logical library.
Once an IBM 3494 library has automatically cataloged the media, you are ready to configure file system protection.
Otherwise, if the archiving file system uses either of the following, create a catalog input file using a text editor:
an IBM 3494 library configured as a shared library
If the Additional
Parameters
field of the mcf
file specifies access=shared
, the IBM 3494 library is divided into multiple logical libraries.
any other network-attached library.
In the example, we use the vi
editor to create a catalog input file, input3494cat
, for a shared IBM 3494 library:
root@mds1:~# vi input3494cat
~
"~/input3494cat" [New File]
Start a record by entering the record index
. Always enter 0
(zero) for the first record, then increment the index for each succeeding record. Enter a space to indicate the end of the field.
Rows define records and spaces delimit fields in build_cat
input files. The value of the first field, the index
, is simply a consecutive integer starting from 0
that identifies the record within the Oracle HSM catalog. In the example, this is the first record, so we enter 0
:
0
~
"~/input3494cat" [New File]
In the second field of the record, enter the volume serial number (VSN) of the tape volume or, if there is no VSN, a single ?
(question mark). Then enter a space to indicate the end of the field.
Enclose values that contain white-space characters (if any) in double quotation marks: "VOL 0A"
. In this example, the VSN of the first volume does not contain spaces:
0 VOL001
~
"~/input3494cat" [New File]
In the third field, enter the barcode of the volume (if different from the volume serial number), the volume serial number, or, if there is no volume serial number, the string NO_BAR_CODE
. Then enter a space to indicate the end of the field.
In the example, the barcode of the first volume has the same value as the VSN:
0 VOL001 VOL001
~
"~/input3494cat" [New File]
Finally, in the fourth field, enter the media type of the volume. Then enter a space to indicate the end of the field.
The media type is a two-letter code, such as li
for LTO media (see Appendix A, "Glossary of Equipment Types", for a comprehensive listing of media equipment types). In the example, we are using an IBM 3494 network-attached tape library with LTO tape drives, so we enter li
(including the terminating space):
0 VOL001 VOL001 li
~
"~/input3494cat" [New File]
Repeat steps 3-6 to create additional records for each of the volumes that you intend to use with Oracle HSM. Then save the file.
0 VOL001 VOL001 li 
1 VOL002 VOL002 li 
...
13 VOL014 VOL014 li 
:wq
root@mds1:~#
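Typing one record per volume quickly becomes tedious for a large library. The records above can also be generated with a short loop; this is a generic shell sketch that reuses the file name, VSN series, and li media type from the example:

```shell
# Generate build_cat input records 0-13 for volumes VOL001-VOL014,
# one space-delimited record per line: index, VSN, barcode, media
# type (each field, including the last, ends with a space).
i=0
while [ "$i" -lt 14 ]; do
    vsn=$(printf 'VOL%03d' $((i + 1)))
    printf '%s %s %s li \n' "$i" "$vsn" "$vsn"
    i=$((i + 1))
done > input3494cat

head -2 input3494cat    # shows "0 VOL001 VOL001 li " and "1 VOL002 VOL002 li "
```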
Create the catalog with the build_cat
input-file
catalog-file
command, where input-file
is the name of your input file and catalog-file
is the full path to the library catalog.
If you have specified a catalog name in the Additional
Parameters
field of the mcf
file, use that name. Otherwise, the Oracle HSM software creates a default catalog in the /var/opt/SUNWsamfs/catalog/ directory using the file name family-set-name, where family-set-name is the family set name that you assigned to the library in the mcf file. In the example, we use the family set name i3494:
root@mds1:~# build_cat input3494cat /var/opt/SUNWsamfs/catalog/i3494
If the archiving file system is shared, repeat the preceding step on each potential metadata server.
The archiving file system is now complete and ready for use.
To protect a file system, you need to do two things:
You must protect the files that hold your data.
You must protect the file system itself, so that you can use, organize, locate, access, and manage your data.
In an Oracle HSM archiving file system, file data is automatically protected by the archiver: modified files are automatically copied to archival storage media, such as tape. But if you backed up only your files and then suffered an unrecoverable failure in a disk device or RAID group, you would have the data but no easy way to use it. You would have to create a substitute file system, identify each file, determine its proper location within the new file system, ingest it, and recreate lost relationships between it and users, applications, and other files. This kind of recovery is, at best, a daunting and long drawn-out process.
So, for fast, efficient recovery, you have to actively protect the file-system metadata as well. When you back up the metadata, you back up directory paths, inodes, access controls, symbolic links, and the pointers that tie files to archival copies stored on removable media. So, to recover the file system, you simply have to restore the metadata. When a user subsequently requests a path and file, the file system will use the metadata to find the archive copy and automatically stage the corresponding data to disk.
You protect Oracle HSM file-system metadata by scheduling recovery points and saving archive logs. A recovery point is a compressed file that stores a point-in-time backup copy of the metadata for an Oracle HSM file system. In the event of a data loss—anything from accidental deletion of a user file to catastrophic loss of a whole file system—you can recover to the last known-good state of the file or file system almost immediately by locating the last recovery point at which the file or file system remained intact. You then restore the metadata recorded at that time and either stage the files indicated in the metadata to the disk cache from archival media or, preferably, let the file system stage files on demand, as users and applications access them.
Like any point-in-time backup copy, a recovery point is seldom a complete record of the state of the file system at the time when a failure occurs. Inevitably, at least a few files are created and changed after one recovery point is completed and before the next one is created. You can—and should—minimize this problem by scheduling creation of recovery points frequently and at times when the file system is not in use. But, in practice, scheduling has to be a compromise, because the file system exists to be used.
For this reason, you must also save point-in-time copies of the archiver log file. As each data file is archived, the log file records the volume serial number of the archival media, the archive set and copy number, the position of the archive (tar
) file on the media, and the path to and name of the data file within the tar
file. With this information, you can recover any files that are missing from the recovery point using Solaris or Oracle HSM tar
utilities. However, this information is short-lived. Like most system logs, the archiver log grows rapidly and must thus be overwritten frequently. If you do not make regular copies to complement your recovery points, you will not have log information when you need it.
File system protection thus requires some planning. On the one hand, you want to create recovery points and log-file copies frequently enough and retain them long enough to give you the best chance of recovering lost or damaged files and file systems. On the other hand, you do not want to create recovery points and log-file copies while data files are actively changing and you need to be cognizant of the disk space that they consume (recovery point files and logs can be large). Accordingly, this section recommends a broadly applicable configuration that can be used with many file system configurations without modification. When changes are necessary, the recommended configuration illustrates the issues and serves as a good starting point.
To create and manage recovery points, carry out the following tasks:
Create locations for storing recovery point files and copies of the archiver log.
Automatically create recovery points and save archiver logs.
For each archiving file system that you have configured, proceed as follows:
Log in to the file-system host as root
.
root@mds1:~#
Select a storage location for the recovery point files. Select an independent file system that can be mounted on the file system host.
Make sure that the selected file system has enough space to store both new recovery point files and the number of recovery point files that you plan to retain at any given time.
Recovery point files can be large and you will have to store a number of them, depending on how often you create them and how long you retain them.
Make sure that the selected file system does not share any physical devices with the archiving file system.
Do not store recovery point files in the file system that they are meant to protect. Do not store recovery point files on logical devices, such as partitions or LUNs, that reside on physical devices that also host the archiving file-system.
In the selected file system, create a directory to hold recovery point files. Use the command mkdir
mount-point
/
path
, where mount-point
is the mount point for the selected independent file system and path
is the path and name of the chosen directory.
Do not store recovery point files for several archiving file systems in a single, catch-all directory. Create a separate directory for each, so that recovery point files are organized and easily located when needed.
In the example, we are configuring recovery points for the archiving file system /hqfs1
. So we have created the directory /zfs1/hqfs1_recovery
on the independent file system /zfs1
:
root@mds1:~# mkdir /zfs1/hqfs1_recovery
In a file system that does not share any physical devices with the archiving file system, create a directory for storing point-in-time copies of the archiver log(s) for your file system(s).
In the example, we choose to store log copies in the /var
directory of the host's root file system. We are configuring file system protection for the archiving file system /hqfs1
. So we create the directory /var/hqfs1_archlogs
:
root@mds1:~# mkdir /var/hqfs1_archlogs
Next, automate creation of recovery points and saving of archiver logs.
While you can create metadata recovery point files automatically, either by creating entries in the crontab
file or by using the scheduling feature of the Oracle HSM Manager graphical user interface, the latter method does not automatically save archiver log data. So this section focuses on the crontab
approach. If you wish to use the graphical user interface to schedule recovery points, refer to the Manager online help.
The procedure below creates two crontab
entries that run daily: one that deletes out-of-date recovery point files and then creates a new recovery point and one that saves the archiver log. For each archiving file system that you have configured, proceed as follows:
Log in to the file-system host as root
.
root@mds1:~#
Open the root
user's crontab
file for editing. Use the command crontab
-e
.
The crontab
command opens an editable copy of the root
user's crontab
file in the text editor specified by the EDITOR
environment variable (for full details, see the Solaris crontab
man page). In the example, we use the vi
editor:
root@mds1:~# crontab -e
...
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
First, create the entry that deletes out-of-date recovery point files and creates a new recovery point. On a new line, specify the time of day when the work will be done. Enter minutes
hour
* * *
, where:
minutes
is an integer in the range [0-59
] that specifies the minute when the job starts.
hour
is an integer in the range [0-23
] that specifies the hour when the job starts.
*
(asterisk) specifies unused values.
For a task that runs daily, the values for day of the month [1-31
], month [1-12
], and day of the week [0-6
] are unused.
Spaces separate the fields in the time specification.
For minutes and hour, specify a time when files are not being created or modified.
Creating a recovery point file when file-system activity is minimal ensures that the file reflects the state of the archive as accurately and completely as possible. Ideally, all new and altered files will have been archived before the time you specify.
In the example, we schedule work to begin at 2:10 AM every day:
...
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * *
Continuing on the same line, enter the shell commands that clean up the old recovery point files. Enter the text ( find
directory
-type f
-mtime +
retention
-print | xargs -l1
rm -f;
, where:
(
(opening parenthesis) marks the start of the command sequence that the crontab
entry will execute.
directory
is the path and directory name of the directory where recovery point files are stored and thus the point where we want the Solaris find
command to start its search.
-type f
is the find
command option that specifies plain files (as opposed to block special files, character special files, directories, pipes, etc).
-mtime +
retention
is the find
command option that specifies files that have not been modified for more than retention
, an integer representing the number of 24-hour periods (days) for which recovery point files are retained.
-print
is the find
command option that lists all files found to standard output.
|xargs -l1 rm -f
pipes the output from -print
to the Solaris command xargs -l1
, which sends one line at a time as arguments to the Solaris command rm -f
, which in turn deletes each file found.
;
(semicolon) marks the end of the command line.
In the example, the crontab
entry searches the directory /zfs1/hqfs1_recovery
for any files that have not been modified in more than 72 days and deletes any that it finds. Note that the crontab
entry continues on the same line but wraps around the display area:
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/hqfs1_recovery -type f -mtime +72 -print | xargs -l1 rm -f;
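The cleanup half of this entry can be tried safely on a scratch directory before it goes into the crontab. The throwaway demonstration below uses hypothetical paths and file names; touch -t backdates a file so that find -mtime +72 selects it:

```shell
# Demonstrate the age-based cleanup used in the crontab entry on a
# scratch directory: only the backdated file is deleted.
mkdir -p /tmp/recovery_demo
touch /tmp/recovery_demo/new.dump                  # modified just now
touch -t 202001010101 /tmp/recovery_demo/old.dump  # long past retention
find /tmp/recovery_demo -type f -mtime +72 -print | xargs -l1 rm -f
ls /tmp/recovery_demo                              # only new.dump remains
```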
Continuing on the same line, enter the shell command that changes to the directory where the recovery point is to be created. Enter the text cd mount-point;, where mount-point is the root directory of the archiving file system and the semicolon (;) marks the end of the command line.
The command that creates recovery point files, samfsdump, backs up the metadata for all files in the current directory and in all subdirectories. In the example, we change to the /hqfs1 directory, the mount point for the file system that we are protecting. Note that the crontab entry continues on the same line but wraps around the display area:
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/hqfs1_recovery -type f -mtime +72 -print | xargs -l1 rm -f; cd /hqfs1;
Continuing on the same line, enter the shell commands that create the new daily recovery point. Enter the text /opt/SUNWsamfs/sbin/samfsdump -f directory/`date +\%y\%m\%d`), where:
/opt/SUNWsamfs/sbin/samfsdump is the command that creates recovery points (see the samfsdump man page for full details).
-f is the samfsdump command option that specifies the location where the recovery point file will be saved.
directory is the directory that we created to hold recovery points for this file system.
`date +\%y\%m\%d` is the Solaris date command, enclosed in backquotes so that the shell substitutes its output, plus a formatting template that creates a name for the recovery point file: YYMMDD, where YYMMDD is the last two digits of the current year, the two-digit number of the current month, and the two-digit day of the month (for example, 150122, January 22, 2015). The percent signs are escaped with backslashes because crontab would otherwise interpret them as newlines.
; (semicolon) marks the end of the command line.
) (closing parenthesis) marks the end of the command sequence that the crontab entry will execute.
In the example, we specify the recovery-point directory that we created above, /zfs1/hqfs1_recovery. Note that the crontab entry continues on the same line but wraps around the display area:
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/hqfs1_recovery -type f -mtime +72 -print | xargs -l1 rm -f; cd /hqfs1 ; /opt/SUNWsamfs/sbin/samfsdump -f /zfs1/hqfs1_recovery/`date +\%y\%m\%d`)
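Before installing the entry, you can rehearse the cleanup pipeline safely from an interactive shell. The sketch below uses a throwaway scratch directory as a hypothetical stand-in for /zfs1/hqfs1_recovery; the file names are invented for illustration only.

```shell
# Rehearse the recovery-point cleanup in a throwaway directory.
DIR=$(mktemp -d)                       # stand-in for /zfs1/hqfs1_recovery

touch "$DIR/new.dump"                  # modified just now: should survive
touch -t 202001010000 "$DIR/old.dump"  # modified years ago: should be deleted

# Same pipeline as the crontab entry: select plain files not modified
# for more than 72 days and remove them.
find "$DIR" -type f -mtime +72 -print | xargs rm -f

ls "$DIR"
```

The crontab entry uses xargs -l1, which passes one file name per rm invocation; plain xargs, as above, batches the names, with the same net effect.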
Now create the entry that saves the archiver log. On a new line, specify the time when the work will be done by entering minutes hour * * day, where:
minutes is an integer in the range [0-59] that specifies the minute when the job starts.
hour is an integer in the range [0-23] that specifies the hour when the job starts.
day is an integer in the range [0-6] that specifies the day of the week when the job starts (0 is Sunday, 6 is Saturday).
* (asterisk) specifies unused values. For a task that runs weekly, the values for day of the month [1-31] and month [1-12] are unused.
Spaces separate the fields in the time specification. Choose values for minutes and hour that specify a time when files are not being created or modified.
In the example, we schedule work to begin at 3:15 AM every Sunday:
...
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/hqfs1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /hqfs1 ; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/hqfs1_recovery/`date +\%y\%m\%d`)
15 3 * * 0
Continuing on the same line, enter a shell command that moves the current archiver log to a backup location and gives it a unique name. Enter the text ( mv /var/adm/hqfs1.archiver.log /var/hqfs1_archlogs/`date +\%y\%m\%d`;.
This step saves log entries that would be overwritten if left in the active log file. In the example, we move the archiver log for the hqfs1 file system to our chosen location, /var/hqfs1_archlogs/, and rename it YYMMDD, where YYMMDD is the last two digits of the current year, the two-digit number of the current month, and the two-digit day of the month (for example, 150122, January 22, 2015):
...
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/hqfs1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /hqfs1 ; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/hqfs1_recovery/`date +\%y\%m\%d`)
15 3 * * 0 ( mv /var/adm/hqfs1.archiver.log /var/hqfs1_archlogs/`date +\%y\%m\%d`;
Continuing on the same line, enter a shell command to reinitialize the archiver log file. Enter the text touch /var/adm/hqfs1.archiver.log ).
In the example, note that the crontab entry continues on the same line but wraps around the display area:
...
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/hqfs1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /hqfs1 ; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/hqfs1_recovery/`date +\%y\%m\%d`)
15 3 * * 0 ( mv /var/adm/hqfs1.archiver.log /var/hqfs1_archlogs/`date +\%y\%m\%d`; touch /var/adm/hqfs1.archiver.log )
Save the file, and close the editor.
# The root crontab should be used to perform accounting data collection.
10 3 * * * /usr/sbin/logadm
15 3 * * 0 [ -x /usr/lib/fs/nfs/nfsfind ] && /usr/lib/fs/nfs/nfsfind
30 3 * * * [ -x /usr/lib/gss/gsscred_clean ] && /usr/lib/gss/gsscred_clean
30 0,9,12,18,21 * * * /usr/lib/update-manager/update-refresh.sh
10 2 * * * ( find /zfs1/hqfs1_recovery -type f -mtime +72 -print | \
xargs -l1 rm -f; cd /hqfs1 ; /opt/SUNWsamfs/sbin/samfsdump \
-f /zfs1/hqfs1_recovery/`date +\%y\%m\%d`)
15 3 * * 0 ( mv /var/adm/hqfs1.archiver.log /var/hqfs1_archlogs/`date +\%y\%m\%d`; touch /var/adm/hqfs1.archiver.log )
:wq
root@mds1:~#
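The move-and-touch sequence in the weekly entry can likewise be tried before relying on it in cron. This sketch substitutes scratch files for the real paths: on the server they would be /var/adm/hqfs1.archiver.log and /var/hqfs1_archlogs/, which are example names in any case.

```shell
# Rehearse the archiver-log rotation with scratch stand-ins.
LOG=$(mktemp)          # stand-in for the active archiver log
DEST=$(mktemp -d)      # stand-in for the backup directory
echo "archived file record" > "$LOG"

# Same two commands as the crontab entry: save the log under a YYMMDD
# name, then re-create it empty so that logging can continue.
mv "$LOG" "$DEST/$(date +%y%m%d)"
touch "$LOG"

ls "$DEST"             # one file, named for today's date
```

Note that the backslash-escaped \% form is only needed inside crontab entries; at an interactive shell the format string is used unescaped, as here.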
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you need to be able to verify the data integrity of archival tape volumes, configure archival media validation.
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Media validation is a technique that evaluates the data integrity of tape media using SCSI verify commands. The SCSI driver on the host calculates a CRC checksum for the logical blocks of data that it writes to the drive and sends a verify command. The drive reads the data blocks, calculates its own checksum, and compares the result with the value supplied by the driver. It returns an error if there is a discrepancy. The drive discards the data it reads as soon as the checksum is complete, so there is no additional I/O-related overhead on the host.
Oracle HSM supports media validation in two ways:
You can configure Oracle HSM to support Data Integrity Validation (DIV) to validate data on StorageTek T10000 tape media, either manually or automatically under Oracle HSM Periodic Media Verification.
You can also configure Oracle HSM Periodic Media Verification to automatically validate data on both StorageTek T10000 tape media and other formats, such as LTO Ultrium.
Data Integrity Validation (DIV) is a feature of Oracle StorageTek tape drives that works with the Oracle HSM software to ensure the integrity of stored data. When the feature is enabled (div = on or div = verify), both the server host and the drive calculate and compare checksums during I/O. During write operations, the server calculates a four-byte checksum for each data block and passes the checksum to the drive along with the data. The tape drive then recalculates the checksum and compares the result to the value supplied by the server. If the values agree, the drive writes both the data block and the checksum to tape. During read operations, both the drive and the host read a data block and its associated checksum from tape. Each recalculates the checksum from the data block and compares the result to the stored checksum. If checksums do not match at any point, the drive notifies the application software that an error has occurred.
The div = verify option provides an additional layer of protection when writing data. When the write operation is complete, the host asks the tape drive to reverify the data. The drive then rescans the data, recalculates checksums, and compares the results to the checksums stored on the tape. The drive performs all operations internally, with no additional I/O (data is discarded), so there is no additional overhead on the host system. You can also use the Oracle HSM tpverify (tape-verify) command to perform this step on demand.
To configure Data Integrity Validation, proceed as follows:
Log in to the Oracle HSM server as root.
In the example, the metadata server is named samfs-mds:
root@samfs-mds:~#
Make sure that the metadata server is running Oracle Solaris 11 or higher.
root@samfs-mds:~# uname -r
5.11
root@samfs-mds:~#
Make sure that the archival storage equipment defined in the Oracle HSM mcf file includes compatible tape drives: StorageTek T10000C (minimum firmware level 1.53.315) or T10000D.
Idle all archiving processes, if any. Use the command samcmd aridle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
root@samfs-mds:~# samcmd aridle
root@samfs-mds:~#
Idle all staging processes, if any. Use the command samcmd stidle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
root@samfs-mds:~# samcmd stidle
root@samfs-mds:~#
Wait for any active archiving jobs to complete. Check on the status of the archiving processes using the command samcmd a.
When archiving processes are Waiting for :arrun, the archiving process is idle:
root@samfs-mds:~# samcmd a
Archiver status samcmd 6.0 14:20:34 Feb 22 2015
samcmd on samfs-mds
sam-archiverd: Waiting for :arrun
sam-arfind: ...
Waiting for :arrun
Wait for any active staging jobs to complete. Check on the status of the staging processes using the command samcmd u.
When staging processes are Waiting for :strun, the staging process is idle:
root@samfs-mds:~# samcmd u
Staging queue samcmd 6.0 14:20:34 Feb 22 2015
samcmd on samfs-mds
Staging queue by media type: all
sam-stagerd: Waiting for :strun
root@samfs-mds:~#
Idle all removable media drives before proceeding further. For each drive, use the command samcmd equipment-number idle, where equipment-number is the equipment ordinal number assigned to the drive in the /etc/opt/SUNWsamfs/mcf file.
This command will allow current archiving and staging jobs to complete before turning drives off, but will not start any new work. In the example, we idle four drives, with ordinal numbers 801, 802, 803, and 804:
root@samfs-mds:~# samcmd 801 idle
root@samfs-mds:~# samcmd 802 idle
root@samfs-mds:~# samcmd 803 idle
root@samfs-mds:~# samcmd 804 idle
root@samfs-mds:~#
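When many drives must be idled, the repeated invocations can be generated in a loop. This is a dry-run sketch: the ordinals 801-804 come from the example mcf file, and the echo only prints each command. Remove the echo on a metadata server where samcmd is actually installed.

```shell
# Dry-run sketch: print the samcmd invocation for each drive ordinal.
# Remove 'echo' to issue the commands for real on an Oracle HSM host.
OUT=$(for eq in 801 802 803 804; do
    echo "samcmd $eq idle"
done)
printf '%s\n' "$OUT"
```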
Wait for running jobs to complete.
We can check on the status of the drives using the command samcmd r. When all drives are notrdy and empty, we are ready to proceed.
root@samfs-mds:~# samcmd r
Removable media samcmd 6.0 14:20:34 Feb 22 2015
samcmd on samfs-mds
ty eq status act use state vsn
li 801 ---------p 0 0% notrdy empty
li 802 ---------p 0 0% notrdy empty
li 803 ---------p 0 0% notrdy empty
li 804 ---------p 0 0% notrdy empty
root@samfs-mds:~#
When the archiver and stager processes are idle and the tape drives are all notrdy, stop the library-control daemon. Use the command samd stop.
root@samfs-mds:~# samd stop
root@samfs-mds:~#
Open the /etc/opt/SUNWsamfs/defaults.conf file in a text editor. Uncomment the line #div = off, if necessary, or add it if it is not present.
By default, div (Data Integrity Validation) is off (disabled).
In the example, we open the file in the vi editor and uncomment the line:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
div = off
To enable Data Integrity Validation read, write, and verify operations, change the line #div = off to div = on, and save the file.
Data will be verified as each block is written and read, but the Oracle HSM archiver software will not verify complete file copies after they are archived.
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
div = on
:wq
root@samfs-mds:~#
To enable the verify-after-write option of the Data Integrity Validation feature, change the line #div = off to div = verify, and save the file.
The host and the drive carry out Data Integrity Validation as each block is written or read. In addition, whenever a complete archive request is written out to tape, the drive re-reads the newly stored data and checksums, recalculates, and compares the stored and calculated results.
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
div = verify
:wq
root@samfs-mds:~#
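If you prefer a non-interactive edit, the same change can be scripted with sed. The sketch below works on a scratch copy that mimics the relevant line; on a real server the target would be /etc/opt/SUNWsamfs/defaults.conf, and you should keep a backup before rewriting it.

```shell
# Work on a scratch copy that mimics the relevant defaults.conf line.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
# These are the defaults. To change the default behavior, uncomment the
# appropriate line and change the value.
#div = off
EOF

# Uncomment the line and set the desired value (on or verify).
sed 's/^#div = off/div = verify/' "$CONF" > "$CONF.new" && mv "$CONF.new" "$CONF"
grep '^div' "$CONF"    # prints: div = verify
```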
Tell the Oracle HSM software to re-read the defaults.conf file and reconfigure itself accordingly. Use the samd config command.
root@samfs-mds:~# /opt/SUNWsamfs/sbin/samd config
If you stopped Oracle HSM operations in an earlier step, restart them now using the samd start command.
root@samfs-mds:~# samd start
root@samfs-mds:~#
Data Integrity Validation is now configured.
If you need to automate data integrity validation, go to "Configure Oracle HSM Periodic Media Verification".
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
You can set up Periodic Media Verification (PMV) for Oracle HSM archiving file systems. Periodic Media Verification automatically checks the data integrity of the removable media in a file system. It checks StorageTek T10000 media using StorageTek Data Integrity Validation and other drives using the widely supported SCSI verify(6) command.
The Periodic Media Verification feature adds an Oracle HSM daemon, verifyd, that periodically applies the tpverify command, logs any errors detected, notifies administrators, and automatically performs specified recovery actions. You configure Periodic Media Verification by setting policy directives in a configuration file, verifyd.cmd. Policies can specify the times when verification scans are run, the types of scan done, the libraries and drives that can be used, the tape volumes that should be scanned, and the actions that Oracle HSM takes when errors are detected. Oracle HSM can, for example, automatically re-archive files that contain errors and/or recycle tape volumes that contain errors.
Log in to the Oracle HSM server as root.
In the example, the metadata server is named samfs-mds:
root@samfs-mds:~#
If you have not already done so, configure Oracle HSM to support Data Integrity Validation (DIV) before proceeding.
Make sure that the metadata server is running Oracle Solaris 11 or higher.
root@samfs-mds:~# uname -r
5.11
root@samfs-mds:~#
Open the /etc/opt/SUNWsamfs/verifyd.cmd file in a text editor.
In the example, we open the file in the vi editor:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
# For additional information about the format of the verifyd.cmd file,
# type "man verifyd.cmd".
# Enable Oracle HSM Periodic Media Validation (PMV)
pmv = off
To enable Periodic Media Verification, enter the line pmv = on.
By default, Periodic Media Verification is off. In the example, we set it on:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
# For additional information about the format of the verifyd.cmd file,
# type "man verifyd.cmd".
# Enable Oracle HSM Periodic Media Validation (PMV)
pmv = on
Set a run time. Enter the line run_time = always to run verification continuously, or run_time = HHMM hhmm [DD dd], where HHMM and hhmm are, respectively, starting and ending times and DD and dd are an optional starting and ending day.
HH and hh are hours of the day in the range 00-24, MM and mm are minutes in the range 00-59, and DD and dd are days of the week in the range [0-6], where 0 is Sunday and 6 is Saturday. The default is 2200 0500 6 0.
Verification will not compete with more immediately important file system operations, however. The verification process automatically yields tape volumes and/or drives that are required by the archiver and stager. So, in the example, we set the run time to always:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
# For additional information about the format of the verifyd.cmd file,
# type "man verifyd.cmd".
# Enable Oracle HSM Periodic Media Validation (PMV)
pmv = on
# Run all of the time. PMV will yield VSNs and drives when
# resources are wanted by the SAM-QFS archiver and stager.
run_time = always
Specify a verification method. Enter the line pmv_method = specified-method, where specified-method is one of the following:
The standard method is specifically for use with Oracle StorageTek T10000C and later tape drives. Optimized for speed, the standard method verifies the edges, beginning, end, and first 1,000 blocks of the media.
The complete method is also for use with Oracle StorageTek T10000C and later tape drives. It verifies the media error correction code (ECC) for every block on the media.
The complete plus method is also for use with Oracle StorageTek T10000C and later tape drives. It verifies both the media error correction code (ECC) and the Data Integrity Validation checksum for each block on the media (see "Configure Oracle HSM to Support Data Integrity Validation (DIV)").
The legacy method can be used with all other tape drives and is used automatically when media is marked bad in the catalog and when drives do not support the method specified in the verifyd.cmd file. It runs a 6-byte, fixed-block mode SCSI Verify command, skipping previously logged defects. When a new permanent media error is found, the legacy method skips to the next file and logs the newly discovered error in the media defects database.
The mir rebuild method rebuilds the media information region (MIR) of an Oracle StorageTek tape cartridge if the MIR is missing or damaged. It works with media that is marked bad in the media catalog and is automatically specified when MIR damage is detected.
In the example, we are using LTO drives, so we specify legacy:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
# resources are wanted by the SAM-QFS archiver and stager.
run_time = always
pmv_method = legacy
To use all available libraries and drives for verification, enter the line pmv_scan = all.
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_method = legacy
pmv_scan = all
To use all available drives in a specified library for verification, enter the line pmv_scan = library equipment-number, where equipment-number is the equipment number assigned to the library in the file system's mcf file.
In the example, we let the verification process use all drives in library 800.
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_method = legacy
pmv_scan = library 800
To limit the number of drives that the verification process can use in a specified library, enter the line pmv_scan = library equipment-number max_drives number, where equipment-number is the equipment number assigned to the library in the file system's mcf file and number is the maximum number of drives that can be used.
In the example, we let the verification process use at most 2 drives in library 800:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_method = legacy
pmv_scan = library 800 max_drives 2
To specify the drives that the verification process can use in a specified library, enter the line pmv_scan = library equipment-number drive drive-numbers, where equipment-number is the equipment number assigned to the library in the file system's mcf file and drive-numbers is a space-delimited list of the equipment numbers assigned to the specified drives in the mcf file.
In the example, we let the verification process use drives 903 and 904 in library 900:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_method = legacy
pmv_scan = library 900 drive 903 904
To specify the drives that the verification process can use in two or more libraries, enter the line pmv_scan = library-specification library-specification ..., where each library-specification is a library clause of the forms shown above: library equipment-number, optionally followed by max_drives number or drive drive-numbers.
In the example, we let the verification process use at most 2 drives in library 800 and drives 903 and 904 in library 900:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_method = legacy
pmv_scan = library 800 max_drives 2 library 900 drive 903 904
To disable periodic media verification and prevent it from using any equipment, enter the line pmv_scan = off.
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_method = legacy
pmv_scan = off
To automatically flag the media for recycling once periodic media verification has detected a specified number of permanent errors, enter the line action = recycle perms number-errors, where number-errors is the number of errors.
In the example, we configure Oracle HSM to flag the media for recycling after 10 errors have been detected:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_scan = all
action = recycle perms 10
To automatically re-archive files that contain bad blocks after errors have accumulated for a specified period, enter the line action = rearch age time, where time is a space-delimited list of any combination of SECONDSs, MINUTESm, HOURSh, DAYSd, and/or YEARSy, and where SECONDS, MINUTES, HOURS, DAYS, and YEARS are integers.
The oldest media defect must have aged for the specified period before the file system is scanned for files that need archiving. In the example, we set the re-archiving age to 1 (one) minute:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_scan = all
action = rearch age 1m
To mark the media as bad when periodic media verification detects a permanent media error and take no action otherwise, enter the line action = none.
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_scan = all
action = none
Specify the tape volumes that should be verified periodically. Enter the line pmv_vsns = selection-criterion, where selection-criterion is all or a space-delimited list of regular expressions that specify one or more volume serial numbers (VSNs).
The default is all. In the example, we supply three regular expressions: ^VOL0[01][0-9] and ^VOL23[0-9] specify two sets of volumes with volume serial numbers in the ranges VOL000 to VOL019 and VOL230 to VOL239, respectively, while VOL400 specifies the volume with that specific volume serial number:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_scan = all
action = none
pmv_vsns = ^VOL0[01][0-9] ^VOL23[0-9] VOL400
Oracle HSM will not try to verify volumes if they need to be audited, if they are scheduled for recycling, if they are unavailable, if they are foreign (non-Oracle HSM) volumes, or if they do not contain data. Cleaning cartridges, volumes that are unlabeled, and volumes that have duplicate volume serial numbers are also excluded.
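You can check which serial numbers a candidate pmv_vsns expression actually matches before committing it to verifyd.cmd. In this sketch, grep -E stands in for the daemon's regular-expression matching, and the sample VSNs are invented for illustration:

```shell
# Test the three example expressions against some sample VSNs.
PATTERN='^VOL0[01][0-9]|^VOL23[0-9]|VOL400'

for vsn in VOL005 VOL019 VOL020 VOL235 VOL400 XVOL231; do
    if echo "$vsn" | grep -Eq "$PATTERN"; then
        echo "$vsn: selected"
    else
        echo "$vsn: not selected"
    fi
done
```

As expected, VOL020 falls outside the VOL000-VOL019 range and is not selected, and the anchored ^ prevents XVOL231 from matching.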
Define the desired verification policy. Enter the line pmv_policy = verified age vertime [modified age modtime] [mounted age mnttime], where:
verified age specifies the minimum time that must have passed since the volume was last verified.
modified age (optional) specifies the minimum time that must have passed since the volume was last modified.
mounted age (optional) specifies the minimum time that must have passed since the volume was last mounted.
The parameter values vertime, modtime, and mnttime are combinations of non-negative integers and the following units of time: y (years), m (months), d (days), H (hours), M (minutes), and S (seconds).
Oracle HSM identifies and ranks candidates for verification based on the amount of time that has passed since the volume was last verified and, optionally, modified and/or mounted. The default policy is the single parameter verified age 6m (six months). In the example, we set the last-verified age to three months and the last-modified age to fifteen months:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_scan = all
action = none
pmv_vsns = ^VOL0[01][0-9] ^VOL23[0-9] VOL400
pmv_policy = verified age 3m modified age 15m
Save the /etc/opt/SUNWsamfs/verifyd.cmd file, and close the editor.
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/verifyd.cmd
...
pmv_vsns = ^VOL0[01][0-9] ^VOL23[0-9] VOL400
pmv_policy = verified age 3m modified age 15m
:wq
root@mds1:~#
Check the verifyd.cmd file for errors by entering the command tpverify -x. Correct any errors found.
The tpverify -x command reads the verifyd.cmd file and stops if it encounters an error:
root@mds1:~# tpverify -x
Reading '/etc/opt/SUNWsamfs/verifyd.cmd'.
PMV: off
Run-time:
Start Time: 2200
End Time: 0500
PMV Scan: all
PMV Method: legacy
STA Scan: off
Action: none
PMV VSNs: all
PMV Policy:
Last Verified Age: 6m
root@mds1:~#
Restart the verification service using the new verifyd.cmd file. Enter the command tpverify -r.
root@mds1:~# tpverify -r
root@mds1:~#
You have finished configuring periodic media verification.
If you need to enable WORM (Write Once Read Many) capability on the file system, see "Enabling Support for Write Once Read Many (WORM) Files".
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Write-once read-many (WORM) files are used in many applications for legal and archival reasons. WORM-enabled Oracle HSM file systems support default and customizable file-retention periods, data and path immutability, and subdirectory inheritance of the WORM setting. You can use either of two WORM modes:
standard compliance mode (the default)
The standard WORM mode starts the WORM retention period when a user sets UNIX setuid permission on a directory or non-executable file (chmod 4000 directory|file). Since setting setuid (set user ID upon execution) permission on an executable file presents security risks, files that also have UNIX execute permission cannot be retained using this mode.
emulation mode
The WORM emulation mode starts the WORM retention period when a user makes a writable file or directory read-only (chmod 444 directory|file), so executable files can be retained.
Both standard and emulation modes have a strict WORM implementation and a less restrictive, lite implementation that relaxes some restrictions for root users. Neither the strict nor the lite implementations allow changes to data or paths once retention has been triggered on a file or directory. The strict implementations do not let anyone shorten the specified retention period (by default, 43,200 minutes/30 days) or delete files or directories prior to the end of the retention period. They also do not let anyone use sammkfs to delete volumes that hold currently retained files and directories. The strict implementations are thus well-suited to meeting legal and regulatory compliance requirements. The lite implementations let root users shorten retention periods, delete files and directories, and delete volumes using the file-system creation command sammkfs. The lite implementations may thus be better choices when both data integrity and flexible management are primary requirements.
Take care when selecting a WORM implementation and when enabling retention on a file. In general, use the least restrictive option that is consistent with requirements. You cannot change from standard to emulation modes or vice versa. So choose carefully. If management flexibility is a priority or if retention requirements may change at a later date, select a lite implementation. You can upgrade from the lite version of a WORM mode to the strict version, should it later prove necessary. But you cannot change from a strict implementation to a lite implementation. Once a strict WORM implementation is in effect, files must be retained for their full specified retention periods. So set retention to the shortest value consistent with requirements.
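The two trigger commands can be illustrated on an ordinary scratch file. No retention actually starts here, since the file system below is not mounted with a WORM option; the sketch only shows the permission changes that act as triggers:

```shell
# Sketch of the two WORM retention triggers, run against a scratch file.
F=$(mktemp)

# Standard mode trigger: set setuid (chmod 4000) on a non-executable file.
chmod 4000 "$F"
ls -l "$F" | cut -c1-10     # ---S------

# Emulation mode trigger: make the file read-only (chmod 444).
chmod 444 "$F"
ls -l "$F" | cut -c1-10     # -r--r--r--
```

On a file system mounted worm_capable or worm_lite, the first chmod would start the retention clock; on one mounted worm_emul or emul_lite, the second would.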
You enable WORM support on a file system using mount options. Proceed as follows.
Log in as root.
root@solaris:~#
Back up the operating system's /etc/vfstab file.
root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Open the /etc/vfstab file in a text editor and locate the entry for the Oracle HSM file system for which you want to enable WORM support.
In the example, we open the /etc/vfstab file in the vi editor and locate the archiving file system worm1:
root@solaris:~# vi /etc/vfstab
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes -
To enable the strict implementation of the standard WORM compliance mode, enter the worm_capable option in the Mount Options column of the vfstab file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes worm_capable
To enable the lite implementation of the standard WORM compliance mode, enter the worm_lite option in the Mount Options column of the vfstab file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes worm_lite
To enable the strict implementation of the WORM emulation mode, enter the worm_emul option in the Mount Options column of the vfstab file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes worm_emul
To enable the lite implementation of the WORM emulation mode, enter the emul_lite option in the Mount Options column of the vfstab file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - yes emul_lite
To change the default retention period for files that are not explicitly assigned a retention period, add the def_retention=period option to the Mount Options column of the vfstab file, where period takes one of the forms explained below.
The value of period can take any of three forms:
permanent or 0, which specifies permanent retention.
YEARSy DAYSd HOURSh MINUTESm, where YEARS, DAYS, HOURS, and MINUTES are non-negative integers and any specifier may be omitted. So, for example, 5y3d1h4m, 2y12h, and 365d are all valid.
MINUTES, where MINUTES is an integer in the range [1-2147483647].
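As a sanity check, an interval such as 5y3d1h4m can be expanded into minutes by hand. The arithmetic below assumes a year counts as 365 days and a day as 24 hours; Oracle HSM's exact internal accounting is not shown in this guide, so treat the calculation as an illustration only:

```shell
# Expand 5y3d1h4m into minutes, assuming 1y = 365d and 1d = 24h
# (illustrative assumption, not a statement of Oracle HSM internals).
echo $(( 5*365*24*60 + 3*24*60 + 1*60 + 4 ))
# → 2632384
```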
Set a default retention period if you must set retention periods that extend beyond the year 2038. UNIX utilities such as touch use signed, 32-bit integers to represent time as the number of seconds that have elapsed since January 1, 1970. The largest number of seconds that a 32-bit integer can represent translates to January 18, 2038 at 10:14 PM.
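The cutoff is easy to check on any host with GNU date (Linux; the stock Solaris /usr/bin/date lacks the -d option). The largest signed 32-bit value, 2147483647 seconds after the epoch, falls on January 19, 2038 in UTC; the January 18, 10:14 PM figure cited above is the same instant expressed in a US local time zone:

```shell
# Render the largest signed 32-bit time_t value as a calendar date.
# Requires GNU date; not available in stock Solaris.
LC_ALL=C date -u -d @2147483647
# → Tue Jan 19 03:14:07 UTC 2038
```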
If a value is not supplied, def_retention defaults to 43200 minutes (30 days). In the example, we set the retention period for a standard WORM-capable file system to 777600 minutes (540 days):
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
worm1 - /worm1 samfs - no worm_capable,def_retention=777600
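The 540-day figure follows directly from the minute count:

```shell
# 777600 minutes divided by 60 minutes/hour and 24 hours/day gives days.
echo $(( 777600 / 60 / 24 ))
# → 540
```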
Save the vfstab file, and close the editor.
The file system is WORM-enabled. Once one or more WORM files are resident in the file system, the Oracle HSM software will update the file system superblock to reflect the WORM capability. Any subsequent attempt to rebuild the file system with sammkfs will fail if the file system has been mounted with the strict worm_capable or worm_emul mount option.
If you need to interwork with systems that use LTFS or if you need to transfer large quantities of data between remote sites, see "Enabling Support for the Linear Tape File System (LTFS)".
If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
Oracle HSM can import data from and export data to Linear Tape File System (LTFS) volumes. This capability facilitates interworking with systems that use LTFS as their standard tape format. It also eases transfer of very large volumes of data between remote Oracle HSM sites, when typical wide-area network (WAN) connections are too slow or too expensive for the task.
Note that the Oracle HSM software supports but does not include LTFS functionality. To use LTFS file systems, the host's Solaris operating system must include the SUNWltfs package. If necessary, download and install the SUNWltfs package before proceeding further.
For information on using and administering LTFS volumes, see the samltfs man page and the Oracle Hierarchical Storage Manager and StorageTek QFS Software Maintenance and Administration Guide.
To enable Oracle HSM LTFS support, proceed as follows:
Log in to the Oracle HSM metadata server as root.
root@samfs-mds:~#
Idle all archiving processes, if any. Use the command samcmd aridle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
root@samfs-mds:~# samcmd aridle
root@samfs-mds:~#
Idle all staging processes, if any. Use the command samcmd stidle.
This command will allow current archiving and staging to complete, but will not start any new jobs:
root@samfs-mds:~# samcmd stidle
root@samfs-mds:~#
Wait for any active archiving jobs to complete. Check on the status of the archiving processes using the command samcmd a.
When archiving processes are Waiting for :arrun, the archiving process is idle:
root@samfs-mds:~# samcmd a
Archiver status samcmd 6.0 14:20:34 Feb 22 2015
samcmd on samfs-mds
sam-archiverd: Waiting for :arrun
sam-arfind: ...
Waiting for :arrun
Wait for any active staging jobs to complete. Check on the status of the staging processes using the command samcmd u.
When staging processes are Waiting for :strun, the staging process is idle:
root@samfs-mds:~# samcmd u
Staging queue samcmd 6.0 14:20:34 Feb 22 2015
samcmd on solaris.demo.lan
Staging queue by media type: all
sam-stagerd: Waiting for :strun
root@solaris:~#
Idle all removable media drives before proceeding further. For each drive, use the command samcmd equipment-number idle, where equipment-number is the equipment ordinal number assigned to the drive in the /etc/opt/SUNWsamfs/mcf file.
This command will allow current archiving and staging jobs to complete before turning drives off, but will not start any new work. In the example, we idle four drives, with ordinal numbers 801, 802, 803, and 804:
root@samfs-mds:~# samcmd 801 idle
root@samfs-mds:~# samcmd 802 idle
root@samfs-mds:~# samcmd 803 idle
root@samfs-mds:~# samcmd 804 idle
root@samfs-mds:~#
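With many drives, the repeated commands are conveniently wrapped in a loop. The sketch below is an illustration, not part of the Oracle HSM toolset: the ordinals 801-804 match the example and are site-specific, and a dry-run stub stands in for samcmd on hosts where Oracle HSM is not installed:

```shell
#!/bin/sh
# Idle each tape drive in a list of equipment ordinals.
if ! command -v samcmd >/dev/null 2>&1; then
    # Dry-run stub for hosts without Oracle HSM installed.
    samcmd() { echo "dry-run: samcmd $*"; }
fi

for eq in 801 802 803 804; do
    samcmd "$eq" idle
done
```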
Wait for running jobs to complete.
We can check on the status of the drives using the command samcmd r. When all drives are notrdy and empty, we are ready to proceed.
root@samfs-mds:~# samcmd r
Removable media samcmd 6.0 14:20:34 Feb 22 2015
samcmd on samqfs1host
ty eq status act use state vsn
li 801 ---------p 0 0% notrdy empty
li 802 ---------p 0 0% notrdy empty
li 803 ---------p 0 0% notrdy empty
li 804 ---------p 0 0% notrdy empty
root@samfs-mds:~#
When the archiver and stager processes are idle and the tape drives are all notrdy, stop the library-control daemon. Use the command samd stop.
root@samfs-mds:~# samd stop
root@samfs-mds:~#
Download and review the LTFS Open Edition (LTFS-OE) documentation for the current version of the software. Documents are available at https://oss.oracle.com/projects/ltfs/documentation/.
At a minimum, review the README.txt file and the installation document for Solaris (the operating system used on the metadata server): INSTALL.solaris. Check the README.txt file for hardware compatibility information.
Download, install, and configure the LTFS-OE packages. Follow the instructions in the INSTALL document.
Download packages from https://oss.oracle.com/projects/ltfs/files/ or as directed in the INSTALL document.
Once LTFS-OE is installed, open the file /etc/opt/SUNWsamfs/defaults.conf in a text editor.
In the example, we open the file in the vi editor:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
In the defaults.conf file, add the line ltfs = mountpoint workers volumes, where:
mountpoint is the directory in the host file system where the LTFS file system should be mounted.
workers is an optional maximum number of drives to use for LTFS.
volumes is an optional maximum number of tape volumes per drive.
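All three fields go on the single ltfs line. For instance, a directive that mounts LTFS at /mnt/ltfs, uses at most two drives, and allows up to ten volumes per drive would read as follows (the numeric values here are illustrative, not recommendations):

```
ltfs = /mnt/ltfs 2 10
```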
In the example, we specify the LTFS mount point as /mnt/ltfs and accept the defaults for the other parameters:
root@samfs-mds:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
ltfs = /mnt/ltfs
Save the defaults.conf file, and close the editor.
...
ltfs = /mnt/ltfs
:wq
root@samfs-mds:~#
Tell the Oracle HSM software to reread the defaults.conf file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.
root@samfs-mds:~# /opt/SUNWsamfs/sbin/samd config
If you stopped Oracle HSM operations in an earlier step, restart them now using the samd start command.
root@samfs-mds:~# samd start
Oracle HSM support for LTFS is now enabled. If you have additional requirements, such as multiple-host file-system access or high-availability configurations, see "Beyond the Basics".
Otherwise, go to "Configuring Notifications and Logging".
This completes basic installation and configuration of Oracle HSM file systems. At this point, you have set up fully functional file systems that are optimally configured for a wide range of purposes.
The remaining chapters in this book address more specialized needs. So, before you embark on the additional tuning and feature implementation tasks outlined below, carefully assess your requirements. Then, if you need additional capabilities, such as high-availability or shared file-system configurations, you can judiciously implement additional features starting from the basic configurations. But if you find that the work you have done so far can meet your needs, additional changes are unlikely to be an improvement. They may simply complicate maintenance and administration.
If applications transfer unusually large or unusually uniform amounts of data to the file system, you may be able to improve file system performance by setting additional mount options. See "Tuning I/O Characteristics for Special Needs" for details.
If you need to configure shared access to the file system, see "Accessing File Systems from Multiple Hosts Using Oracle HSM Software" and/or "Accessing File Systems from Multiple Hosts Using NFS and SMB/CIFS".
If you need to configure a high-availability QFS file system or Oracle HSM archiving file system, see "Preparing High-Availability Solutions".
If you need to configure an Oracle HSM archiving file system to share archival storage hosted at a remote location, see "Configuring SAM-Remote".
If you plan on using the sideband database feature, go to "Configuring the Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".