4 Managing Files and Directories

This chapter addresses the following file and directory management topics:

Setting Oracle HSM File Attributes

The ability to interact with users via a familiar interface—the standard UNIX file system—is a key advantage of Oracle Hierarchical Storage Manager and StorageTek QFS Software. Most users do not even need to be aware of the differences. However, Oracle HSM file systems can provide advanced users with significantly greater capabilities when necessary. Oracle HSM file attributes let users optimize the behavior of the file system for working with individual files and directories. Users who understand their workloads and the characteristics of their data can significantly improve performance file-by-file. Users can, for example, specify direct or buffered I/O based on the characteristics of the data in a given file or directory. They can preallocate file-system space so that large files can be written more sequentially and can specify the stripe width used when writing particular files or directories.

The setfa command sets these file attributes on both new and existing files and directories. If a specified file or directory does not exist, the command creates it. When applied to a directory, setfa sets the specified attributes on all files and subdirectories in the directory. Subsequently created files and directories inherit these attributes.

The basic tasks are outlined below (for additional information, see the setfa man page).
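Because setfa applies recursively and to files that do not yet exist, attribute changes are often scripted. The following dry-run sketch prints, without executing, one setfa command per subdirectory so the changes can be reviewed before they are applied; the demo path is an assumption, and the output should only be piped to a shell on a real Oracle HSM host:

```shell
# Dry run: print (do not execute) a "setfa -r" command for each
# top-level subdirectory of a demo tree. Review the output, then
# pipe it to "sh" on a real Oracle HSM host if it looks right.
DATA_DIR="/tmp/demo-data"                  # illustrative path
mkdir -p "$DATA_DIR/2014" "$DATA_DIR/2015"
for d in "$DATA_DIR"/*/; do
  echo "setfa -r $d"
done
```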

Restore Default File Attribute Values

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. To reset the default attribute values on a file, use the command setfa -d file, where file is the path and name of the file.

    In the example, we reset defaults on the file /samfs1/data/2014/03/series3.15:

    user@mds1:~# setfa -d /samfs1/data/2014/03/series3.15
    
  3. To recursively reset the default attribute values on a directory and all of its contents, use the command setfa -r directory, where directory is the path and name of the directory.

    In the example, we reset the defaults on the subdirectory /samfs1/data/2014/02:

    user@mds1:~# setfa -r /samfs1/data/2014/02/
    
  4. Stop here.

Preallocate File-System Space

Preallocating space for a file ensures that there is enough room to write out the entire file sequentially when the file is written. Writing and reading large files in sequential blocks improves efficiency and overall performance by reducing the overhead associated with seeking and with buffering smaller, more scattered blocks of data. Preallocation is thus best suited to writing a predictable quantity of data in large blocks. Preallocated but unused space remains part of the file when the file closes and cannot be freed for other use until the entire file is deleted.

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. If you need to preallocate space for writing to an existing file that already contains data, use the command setfa -L number-bytes file, where number-bytes is an integer or an integer plus k for kilobytes, m for megabytes, or g for gigabytes, and where file is the name of the file.

    The command setfa -L uses standard allocation and supports striping. Files preallocated with -L can grow beyond their preallocated size. In the example, we preallocate 121 megabytes for the existing file tests/series119b:

    user@mds1:~# setfa -L 121m tests/series119b
    
  3. If you need to preallocate space for writing a new file that has no storage blocks assigned, use the command setfa -l number-bytes file, where:

    • l is the lower-case letter "L".

    • number-bytes is an integer or an integer plus k for kilobytes, m for megabytes, or g for gigabytes.

    • file is the name of the file.

    The command setfa -l preallocates the specified number of bytes. The resulting files are fixed at their preallocated size and can neither grow beyond nor shrink below that size. In the example, we create the file data/2014/a3168445 and preallocate two gigabytes of space for its content:

    user@mds1:~# setfa -l 2g data/2014/a3168445
    
  4. Stop here.
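The size arguments that -l and -L accept (an integer, optionally followed by k, m, or g) reduce to simple arithmetic. A minimal sketch of the suffix-to-bytes conversion; the helper name is illustrative, not part of the setfa command:

```shell
# Convert a setfa-style size argument to bytes: a bare integer is
# bytes; k, m, and g suffixes multiply by 1024, 1024^2, and 1024^3.
to_bytes() {
  case "$1" in
    *k) echo $(( ${1%k} * 1024 )) ;;
    *m) echo $(( ${1%m} * 1024 * 1024 )) ;;
    *g) echo $(( ${1%g} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}
to_bytes 121m   # the -L example above: prints 126877696
to_bytes 2g     # the -l example above: prints 2147483648
```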

Specify Round-Robin or Striped Allocation for a File

By default, Oracle HSM file systems use the allocation method specified for the file system at mount time. But users can specify a preferred allocation method—round-robin or striping with a specified stripe-width—for specified directories or files.

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. To specify round-robin allocation, specify a stripe width of 0 (zero). Use the command setfa -s 0 directory-or-file, where directory-or-file is the name of the directory or file that will be written using the specified allocation method.

    A stripe width of 0 (zero) specifies unstriped, round-robin allocation. The file system starts writing a file on the next available device. It writes successive disk allocation units (DAUs) to the file on the same device until the file is complete or the device runs out of space. If the device runs out of space, the file system moves to the next available device and continues to write disk allocation units. The process repeats until the file is complete. In the example, we specify round-robin allocation for all files written to the data/field-reports directory:

    user@mds1:~# setfa -s 0 data/field-reports
    
  3. To specify striped allocation, specify a stripe width. Use the command setfa -s stripe-width directory-or-file, where stripe-width is an integer in the range [1–255] and directory-or-file is the name of the directory or file that will be written using the specified allocation method.

    A stripe width in the range [1–255] specifies striped allocation. The file system writes the number of disk allocation units (DAUs) specified in the stripe width to multiple devices in parallel until the file is complete. In the example, we specify striped allocation with a stripe width of 1 for all files written to the directory data/2014/, so the file system will write one disk allocation unit to each available device until the file is complete:

    user@mds1:~# setfa -s 1 data/2014/
    
  4. Stop here.
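The difference between the two methods can be pictured with a toy placement model. This is an illustration of the layout rule described above, not Oracle HSM code:

```shell
# Toy model of DAU placement across 4 devices. With striped
# allocation (stripe width 1), successive DAUs rotate across
# devices; with round-robin (width 0), they stay on one device
# until it fills, then move to the next.
DEVICES=4
dau_device() {   # striped placement: DAU number -> device number
  echo $(( $1 % DEVICES ))
}
for dau in 0 1 2 3 4 5; do
  echo "striped, width 1: DAU $dau -> device $(dau_device $dau)"
done
echo "round-robin:      DAUs 0..5 -> device 0 (until it fills)"
```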

Allocate File Storage on a Specified Stripe-Group Device

A user can specify the stripe group device where round-robin or striped allocation should begin. An Oracle HSM stripe group is a logical volume that stripes data across multiple physical volumes. When round-robin file allocation is in effect, the entire file is written on the designated stripe group. When striped allocation is in effect, the first allocation is made on the designated stripe group.

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. To write an entire file to a specific stripe group, use round-robin allocation. Use the command setfa -s 0 -gstripe-group-number file, where stripe-group-number is an integer in the range [0–127] that identifies the specified stripe group and file is the path and name of the file.

    In the example, we specify round-robin allocation starting on stripe group 0 when writing the file reports/site51:

    user@mds1:~# setfa -s 0 -g0 reports/site51
    
  3. To stripe a file across a number of stripe groups starting from a specified stripe group, use striped allocation. Use the command setfa -s stripe-width -gstripe-group-number file, where stripe-width is an integer in the range [1–255] that specifies a number of disk allocation units, stripe-group-number is an integer in the range [0–127] that identifies the starting stripe group, and file is the path and name of the file.

    In the example, we specify striped allocation for the file assessments/site52. We specify three disk allocation units per group, starting from stripe group 21:

    user@mds1:~# setfa -s 3 -g21 assessments/site52
    
  4. Stop here.

Using Extended File Attributes

Like other Solaris and Linux file systems, Oracle HSM file systems support extended file attributes. Extended attributes hold arbitrary metadata that is associated with a file by a user or application, rather than by the file system itself. Extended attributes have been used to hold file digests, the names of authors and source applications, and the character encoding used by text files.

Starting in Release 6.1, Oracle HSM stores small extended attribute files that contain 464 or fewer characters in extension inodes within the metadata partition, instead of using a block in the data partition. The new approach significantly improves file-system performance when extended attributes are in use and file-system metadata is being stored on faster devices, such as flash storage.

Extended file attributes are automatically enabled whenever you create a new file system or restore an existing file system from a recovery point (samfsdump) file. For more information on using extended attributes, see the Solaris fsattr(5) and Linux xattr(7) man pages.
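The 464-character threshold described above can be sketched as a simple rule. The helper name is illustrative; Oracle HSM applies the rule internally:

```shell
# Where would Oracle HSM 6.1+ store an extended-attribute value?
# 464 characters or fewer: an extension inode on the (typically
# faster) metadata partition; larger: a data-partition block.
attr_storage() {
  if [ "${#1}" -le 464 ]; then
    echo "extension inode"
  else
    echo "data-partition block"
  fi
}
attr_storage "utf-8"                         # a short value
attr_storage "$(printf 'x%.0s' $(seq 500))"  # a 500-character value
```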

Accommodating Large Files

Oracle HSM file systems are particularly well suited to working with unusually large files. This section covers the following topics:

Managing Disk Cache With Very Large Files

When manipulating very large files, pay careful attention to the size of available disk cache. If you try to write a file that is larger than your disk cache, non-archiving file systems return an ENOSPC error, while archiving file systems simply wait for space that may never become available, causing applications to block.

Oracle HSM provides two possible alternatives to increasing the size of the disk cache:

Segmenting Files

When you set the Oracle HSM segmentation attribute on a file, the file system breaks the file down into segments of a specified size and manages access requests so that, at any given time, only the currently required segment resides on disk. The remainder of the file resides on removable media.

Segmentation of large files has a number of advantages:

  • Users can create and access files that are larger than the available disk cache.

    Since only segments reside in cache at any given time, you only need to choose a segment size that fits in the disk cache. The complete file can grow to any size that the media can accommodate.

  • Users can access large files that have been released from the disk cache more quickly. Staging a portion of a large file to disk is much faster than waiting for the entire file to stage.

  • The speed and efficiency of archiving can improve when files are segmented, because only changed portions of each file are re-archived.

  • Files can be striped across removable media volumes mounted on multiple drives. Archiving and staging operations can then proceed in parallel, further improving performance.

There are two limitations:

  • You cannot segment files in a shared file system.

  • You cannot segment binary executable files, because the Solaris memory-mapping function, mmap(), cannot map the bytes in a segmented file to the address space of a process.

To create segmented files, proceed as follows:

Segment a File

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Select or, if necessary, create the file(s) that you need to segment.

  3. To segment a single file, use the command segment [-s stage_ahead] -l segment_size file-path-name, where:

    • stage_ahead (optional) is an integer specifying the number of consecutive extra segments to read when a given segment is accessed. Well-chosen values can improve utilization of the system page cache and thus improve I/O performance. The default is 0 (disabled).

    • segment_size is an integer and a unit that together specify the size of each segment. Supported units are k (kilobytes), m (megabytes), and g (gigabytes). The minimum size is one megabyte (1m or 1024k).

    • file-path-name is the path and file name of the file.

    For full details, see the segment man page. In the example, we segment the file 201401.dat using a 1.5-megabyte (1536k) segment size:

    user@mds1:~# segment -l 1536k 201401.dat 
    
  4. To recursively segment the files in a directory and all of its subdirectories, use the command segment [-s stage_ahead] -l segment_size -r directory-path-name, where directory-path-name is the path and name of the starting directory.

    In the example, we segment all files in the /hsm/hsmfs1/data directory and its subdirectories using a 1-megabyte (1m) segment size:

    user@mds1:~# segment -l 1m -r /hsm/hsmfs1/data 
    
  5. Stop here.
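The relationship between file size, segment size, and segment count is a ceiling division; a quick sketch (the helper name is an assumption, not part of the segment command):

```shell
# How many segments does "segment -l" produce for a given file?
# Ceiling division: round up so the final partial segment counts.
segments_needed() {
  file_bytes=$1
  seg_bytes=$2
  echo $(( (file_bytes + seg_bytes - 1) / seg_bytes ))
}
# A 10 GiB file with the 1536k (1.5 MiB) segment size used above:
segments_needed $(( 10 * 1024 * 1024 * 1024 )) $(( 1536 * 1024 ))  # prints 6827
```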

Stripe a Segmented File Across Multiple Volumes

You configure segmented files for striped I/O by assigning them to an archive set that specifies multiple drives. Proceed as follows:

  1. Log in to the host as root.

    In the example, we log in to the metadata server mds1:

    root@mds1:~# 
    
  2. Open the file /etc/opt/SUNWsamfs/archiver.cmd in a text editor.

    In the example, we use the vi editor to open the file:

    root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
    # Configuration file for Oracle HSM archiving file systems ...
    
  3. To stripe segmented files across drives, specify the use of at least two drives for each copy of each archive set that contains segmented files. In the archiver.cmd file, locate the params section. Make sure that the parameters for each copy include the -drives number parameter, where number is two (2) or more. Make any required changes, save the file, and close the editor.

    In the example, the archiver.cmd file specifies two drives for all three copies of all configured archive sets:

    root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
    # Configuration file for Oracle HSM archiving file systems ...
    ...
    #-----------------------------------------------------------------------
    # Copy Parameters
    params
    allsets -sort path -offline_copy stageahead -reserve set
    allsets.1 -startage 10m -drives 2
    allsets.2 -startage 24h -drives 2
    allsets.3 -startage 48h -drives 2
    endparams 
    ...
    :wq
    root@mds1:~# 
    
  4. Check the archiver.cmd file for errors. Use the command archiver -lv.

    The archiver -lv command prints the archiver.cmd file to screen and generates a configuration report if no errors are found. Otherwise, it notes any errors and stops.

    root@mds1:~# archiver -lv
    Reading '/etc/opt/SUNWsamfs/archiver.cmd'.
    ...
     Total space available:  300T
    root@mds1:~# 
    
  5. Tell the Oracle HSM software to re-read the archiver.cmd file and reconfigure itself accordingly. Use the /opt/SUNWsamfs/sbin/samd config command.

    root@mds1:~# samd config
    
  6. Stop here.

Using Removable Media Files for Large Data Sets

Oracle HSM removable media files reside entirely on removable media and thus never occupy space in the file-system disk cache. The file system reads removable media files directly into memory, so the size of the disk cache places no limit on file size. Removable media files that exceed the capacity of a single media cartridge can become multiple-cartridge, volume overflow files. The file system reads and writes the data sequentially.

In most respects, removable media files look like typical UNIX files. They have permissions, a user name, a group name, and a file size. When a user or application requests a removable media file, the system automatically mounts the corresponding volume(s) and the user accesses the data from memory, much as if the data were on disk. But removable media files differ from other Oracle HSM files in two major ways: they are never archived by the Oracle Hierarchical Storage Manager software, and they are not supported over NFS.

The Oracle Hierarchical Storage Manager software does not manage removable media files. The files are never archived or released, and the media that contains them is never recycled. This makes removable media files useful when you need to use removable media for purposes other than archiving. These files are ideal for creating removable disaster-recovery volumes that back up your Oracle HSM configuration and metadata dump files. You can also read data from foreign volumes (volumes created by other applications) by loading the volume read-only and reading the files into memory as removable media files.

Since removable media files cannot be released and the associated volume(s) cannot be recycled, you should generally segregate removable media files on dedicated volumes, rather than mixing them in with archive copies.

Create a Removable Media or Volume Overflow File

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Select the Oracle HSM file system, path, and file name for the removable media file.

    Once the removable media file is created, the file system will address requests for this path and file name using data from removable media.

  3. Create the removable media file. Use the command request -m media-type -v volume-specifier data-file, where media-type is one of the two-character media type codes listed in Appendix A, data-file is the path and name that you selected for the removable media file, and volume-specifier is one of the following:

    • a volume serial number or a slash-delimited list of volume serial numbers

      In the first example, we create file1 on LTO (li) volume VOL080:

      user@mds1:~# request -m li -v VOL080 /hsm/hsmfs1/data/file1
      

      In the second example, we create file2 on LTO (li) volumes VOL081, VOL082, and VOL098:

      user@mds1:~# request -m li -v VOL081/VOL082/VOL098 /hsm/hsmfs1/data/file2
      
    • -l volume-list-file, where volume-list-file is the path and name of a file that, on each line, lists a single volume serial number and, optionally, a space and a decimal or hexadecimal number specifying a starting position on the specified volume (prefix hexadecimals with 0x).

      In the example, using the vi editor, we create file3 on the LTO (li) volumes listed in the file vsnsfile3:

      user@mds1:~# vi vsnsfile3
      VOL180
      VOL181
      VOL182
      :wq
      user@mds1:~# request -m li -v -l vsnsfile3 /hsm/hsmfs1/data/file3
      
  4. Stop here.
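The volume-list file that request -v -l expects is plain text: one volume serial number per line, optionally followed by a starting position. A sketch that builds such a file in /tmp; the volume names and the 0x3a position are illustrative:

```shell
# Build a request(1)-style volume-list file: one VSN per line; an
# optional second field gives a starting position on that volume
# (hexadecimal positions carry a 0x prefix).
cat > /tmp/vsnsfile-demo <<'EOF'
VOL180
VOL181 0x3a
VOL182
EOF
cat /tmp/vsnsfile-demo
```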

Read a Foreign Tape Volume as a Removable Media File

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Make sure that the foreign tape is barcoded, write protected, opened read-only, and positioned to 0.

  3. Select the Oracle HSM file system, path, and file name for the removable media file.

    Once the removable media file is created, the file system will address requests for this path and file name using data from the foreign tape.

  4. Create the removable media file using the -N (foreign media) option. Use the command request -m media-type -N -v volume-serial-number data-file, where:

    • media-type is one of the two-character media type codes listed in Appendix A.

    • volume-serial-number is the volume serial number of the foreign tape.

    • data-file is the path and name for the removable media file.

    In the example, we create a removable media file for the foreign LTO (li) volume FOR991:

    user@mds1:~# request -m li -N -v FOR991 /hsm/hsmfs1/foreignfile
    
  5. Stop here.

Working with Linear Tape File System (LTFS) Volumes

Linear Tape File System is a self-describing tape format that organizes the data on sequential-access tape media into a file system, so that files can be accessed much as if they resided on random-access disk. Oracle HSM provides extensive support for LTFS. The software lets you use LTFS files in Oracle HSM file systems and supplies tools for creating, accessing, and managing LTFS media.

This section addresses the following topics:

Importing LTFS Media Into the Library

The Oracle HSM software automatically recognizes LTFS media. So you can import LTFS volumes with the samimport command, just as you would any other media. See "Importing and Exporting Removable Media" and the import (1m) man page for additional information.

Attaching LTFS Directories and Files to an Oracle HSM File System

The Oracle HSM software can attach Linear Tape File System (LTFS) directories and files to an Oracle HSM file system, so that they can be accessed and managed as if they were themselves Oracle HSM files. The software copies the LTFS metadata from the LTFS volume to an empty directory in an Oracle HSM file system. Using this metadata, Oracle HSM manages the LTFS media and files as it would an archived Oracle HSM file. LTFS files are staged from the LTFS media to the Oracle HSM disk cache for use, either when users access them or all at once, as soon as the LTFS metadata is in place. The Oracle HSM file system's archiving and space-management policies apply as they would for any Oracle HSM file.

This section describes the following tasks:

Making LTFS Files Accessible On Demand

When you attach LTFS files to an Oracle HSM file system, the Oracle HSM software copies file-system metadata from the LTFS volume to a specified directory in the Oracle HSM file system. Files will then be staged to the disk cache when users access them. To attach LTFS files, proceed as follows:

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. In the Oracle HSM file system that will host the LTFS files, create the directory that will hold the LTFS metadata.

    In the example, we create the directory ltfs1/ under the file-system mount point /hsm/hsmfs1:

    user@mds1:~# mkdir /hsm/hsmfs1/ltfs1
    user@mds1:~# 
    
  3. Attach the LTFS files to the Oracle HSM file system. Use the command samltfs attach LTFS-media-type.LTFS-volume-serial-number SAMQFS-directory, where:

    • LTFS-media-type is the two-character media type code for the type of media that holds the LTFS data (see Appendix A).

    • LTFS-volume-serial-number is the six-character, alphanumeric volume serial number of the LTFS volume.

    • The specified media type and volume serial number identify a volume that the catalog lists as an LTFS volume.

      In the Oracle HSM catalog, LTFS media are unlabeled and marked non-SAM and tfs.

    • SAMQFS-directory is the path and name of the directory that will hold LTFS metadata.

    In the example, we attach LTO (li) volume TFS233:

    user@mds1:~# samltfs attach li.TFS233 /hsm/hsmfs1/ltfs1
    user@mds1:~# 
    
  4. Stop here.

Making LTFS Files Immediately Accessible in the Disk Cache

When you ingest LTFS files into an Oracle HSM file system, the Oracle HSM software copies file-system metadata from the LTFS volume to a specified directory in the Oracle HSM file system and immediately stages all files to the disk cache. To ingest LTFS files, proceed as follows:

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. In the Oracle HSM file system that will host the LTFS files, create the directory that will hold the LTFS metadata.

    In the example, we create the directory ltfs2/ under the file-system mount point /hsm/hsmfs1:

    user@mds1:~# mkdir /hsm/hsmfs1/ltfs2
    user@mds1:~# 
    
  3. Ingest the LTFS files into the Oracle HSM file system. Use the command samltfs ingest LTFS-media-type.LTFS-volume-serial-number SAMQFS-directory, where:

    • LTFS-media-type is the two-character media type code for the type of media that holds the LTFS data (see Appendix A).

    • LTFS-volume-serial-number is the six-character, alphanumeric volume serial number of the LTFS volume.

    • The specified media type and volume serial number identify a volume that the catalog lists as an LTFS volume.

      In the Oracle HSM catalog, LTFS media are unlabeled and marked non-SAM and tfs.

    • SAMQFS-directory is the path and name of the directory that will hold the LTFS metadata.

    In the example, we ingest LTO (li) volume TFS234:

    user@mds1:~# samltfs ingest li.TFS234 /hsm/hsmfs1/ltfs2
    user@mds1:~# 
    
  4. Stop here.

Accessing LTFS Media Using Oracle HSM Software

Oracle HSM software can also carry out the following tasks using the LTFS mount point specified in the Oracle HSM defaults.conf file:

Load an LTFS Volume Into a Tape Drive and Mount the LTFS File System

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Load the LTFS volume into a tape drive and mount the file system on the mount point specified in the defaults.conf file. Use the command samltfs load LTFS-media-type.LTFS-volume-serial-number, where:

    • LTFS-media-type is the two-character media type code for the type of media that holds the LTFS data (see Appendix A).

    • LTFS-volume-serial-number is the six-character, alphanumeric volume serial number of the LTFS volume.

    • The specified media type and volume serial number identify a volume that the catalog lists as an LTFS volume.

      In the Oracle HSM catalog, LTFS media are unlabeled and marked non-SAM and tfs.

    In the example, we load LTO (li) volume TFS234 and mount it on the directory specified in the defaults.conf file, /mnt/ltfs:

    user@mds1:~# samltfs load li.TFS234
    
  3. Stop here.

Unmount an LTFS File System and Unload the Volume from the Tape Drive

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Unmount the LTFS file system and unload the corresponding volume from the tape drive. Use the command samltfs unload LTFS-media-type.LTFS-volume-serial-number, where:

    • LTFS-media-type is the two-character media type code for the type of media that holds the LTFS data (see Appendix A).

    • LTFS-volume-serial-number is the six-character, alphanumeric volume serial number of the LTFS volume.

    • The specified media type and volume serial number identify a volume that the catalog lists as an LTFS volume.

      In the Oracle HSM catalog, LTFS media are unlabeled and marked non-SAM and tfs.

    In the example, we unmount the LTFS file system and unload LTO (li) volume TFS435:

    user@mds1:~# samltfs unload li.TFS435
    
  3. Stop here.

Managing LTFS Media Using Oracle HSM Software

The Oracle HSM software provides the basic tools needed for carrying out the following LTFS administrative tasks:

Format a Volume as an LTFS File System

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Partition and format a removable media volume for the LTFS file system. Use the command samltfs mkltfs media-type.volume-serial-number, where:

    • media-type is the two-character media type code for an LTFS-compatible type of media (see Appendix A).

    • volume-serial-number is the six-character alphanumeric volume serial number of the volume.

    In the example, we partition LTO (li) volume VOL234 and format it as an LTFS volume:

    user@mds1:~# samltfs mkltfs li.VOL234
    
  3. Stop here.

Erase LTFS Data and Remove LTFS Formatting and Partitions from a Volume

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Erase the LTFS volume and restore it to general use. Use the command samltfs unltfs media-type.volume-serial-number, where:

    • media-type is the two-character media type code for an LTFS-compatible type of media (see Appendix A).

    • volume-serial-number is the six-character alphanumeric volume serial number of the volume.

    In the example, we erase the LTFS file system data and metadata and remove the partitions on LTO (li) volume VOL234:

    user@mds1:~# samltfs unltfs li.VOL234
    
  3. Stop here.

Check the Integrity of an LTFS File System

  1. Log in to the file system host.

    In the example, we log in to the metadata server mds1:

    user@mds1:~# 
    
  2. Check the integrity of the LTFS file system. Use the command samltfs ltfsck LTFS-media-type.LTFS-volume-serial-number, where:

    • LTFS-media-type is the two-character media type code for the type of media that holds the LTFS data (see Appendix A).

    • LTFS-volume-serial-number is the six-character, alphanumeric volume serial number of the LTFS volume.

    • The specified media type and volume serial number identify a volume that the catalog lists as an LTFS volume.

      In the Oracle HSM catalog, LTFS media are unlabeled and marked non-SAM and tfs.

    In the example, we check the LTFS file system on LTO (li) volume VOL234:

    user@mds1:~# samltfs ltfsck li.VOL234
    
  3. Stop here.

Display LTFS Configuration and Status Information

To display the configuration and status of LTFS, use the command samltfs status.

user@mds1:~# samltfs status

Managing Directories and Files in SMB/CIFS Shares

Using SMB/CIFS shares, administrators can make the directories and files in Oracle HSM UNIX file systems available to Microsoft Windows clients. This section addresses the following topics:

Managing System Attributes in SMB/CIFS Shares

System attributes support SMB/CIFS file sharing by associating Oracle HSM files with non-UNIX metadata that can be interpreted by Microsoft Windows file systems. This section starts with a brief overview of the system attributes supported by Oracle HSM. It then provides basic instructions for the following tasks:

Oracle HSM Supported System Attributes

System attributes are Boolean (true or false) values expressed by an attribute name with the value true or the negation of the name, noname, with the value false. Oracle HSM provides the following system attributes in support of SMB/CIFS file sharing:

  • appendonly means that users can only append data to the file. noappendonly means that this restriction is not in effect.

  • archive means that the file has changed since it was last copied or backed up. noarchive means that the file has not changed since it was last copied or backed up. Oracle HSM does not currently use this attribute.

  • hidden means that the file is not displayed in file listings by default. nohidden means that the file is displayed by default.

  • immutable means that the directory or file and its contents cannot be changed or deleted. noimmutable means that the directory or file can be changed or deleted.

  • nodump means that the file cannot be backed up. nonodump means that the file can be backed up. Oracle Solaris does not use this attribute.

  • nounlink means that the file or the directory and its contents cannot be deleted or renamed. nonounlink means that the file or the directory and its contents can be deleted or renamed.

  • offline means that the file has been released from an Oracle HSM file system. Microsoft Windows systems will not preview the file. nooffline means that the file is online and has not been released from an Oracle HSM file system.

  • readonly means that the file cannot be deleted or modified. noreadonly means that the file can be deleted or modified. The attribute is ignored when applied to directories.

  • sparse means that the stored file contains only non-zero data, with runs of zeros recorded as ranges that the file system restores when the file is accessed or copied to a file system that does not support sparse files. nosparse means that the file is not sparse.

  • system means that the file is critical to the Microsoft Windows operating system, must not be altered or deleted, and should not be displayed in listings by default. nosystem means that the file is not a system file.

Display System Attributes

To view the system attributes of an Oracle HSM file, use the Solaris command ls -/v file, where file is the path and name of the file.

In the example, we list system attributes for the file /hsm/hsmfs1/documents/master-plan.odt:

user@mds1:~# ls -/v /hsm/hsmfs1/documents/master-plan.odt
-rw-r--r--   1 root root  40560 Mar 4 15:52 /hsm/hsmfs1/documents/master-plan.odt
{archive,nohidden,noreadonly,nosystem,noappendonly,nonodump,noimmutable,nonounlink,nooffline,nosparse}
user@mds1:~# 
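Scripts often need to test a single attribute in the brace-delimited list that ls -/v prints. A parsing sketch that uses the listing above as a canned sample; on a live system you would capture real ls -/v output, and the helper name is an assumption:

```shell
# has_attr <attr-list> <name>: succeed if <name> appears in the
# brace-delimited, comma-separated list. Matching whole fields
# matters: "nooffline" must not satisfy a query for "offline".
has_attr() {
  case ",$(echo "$1" | tr -d '{} ')," in
    *,"$2",*) return 0 ;;
    *)        return 1 ;;
  esac
}
attrs="{archive,nohidden,noreadonly,nosystem,noappendonly,nonodump,noimmutable,nonounlink,nooffline,nosparse}"
if has_attr "$attrs" offline; then
  echo "file is offline (released)"
else
  echo "file is online"
fi
```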

Modify System Attributes

To change a system attribute value for a file, use the Solaris command chmod S+v{attributes} file, where attributes is a comma-delimited list of Oracle HSM supported system attributes and file is the path and name of the file.

See the chmod man page for a comprehensive explanation of syntax and available options. In the example, we change the archive attribute from noarchive (false) to archive (true):

root@mds1:~# ls -/v /hsm/hsmfs1/documents/master-plan.odt
-r-xr-xr-x 1 root root 40561 Mar 4 15:52 /hsm/hsmfs1/documents/master-plan.odt
{noarchive,nohidden,readonly,nosystem,noappendonly,nonodump,noimmutable,nonounlink,offline,nosparse}
root@mds1:~# chmod S+v{archive} /hsm/hsmfs1/documents/master-plan.odt
root@mds1:~# ls -/v /hsm/hsmfs1/documents/master-plan.odt
-r-xr-xr-x 1 root root 40561 Mar 4 15:52 /hsm/hsmfs1/documents/master-plan.odt
{archive,nohidden,readonly,nosystem,noappendonly,nonodump,noimmutable,nonounlink,offline,nosparse}
root@mds1:~# 

Administering Access Control Lists

An Access Control List (ACL) is a table that defines access permissions for a file or directory. Each record or Access Control Entry (ACE) in the table defines the access rights of a particular user, group, or class of users or groups. By default, new file systems that you create with Oracle HSM Release 6.1.4 use the Access Control List (ACL) implementation introduced in Network File System (NFS) version 4 and Solaris 11.

A comprehensive account of Solaris ACL administration, syntax, and usage is outside the scope of this document. For full information, see the following references: