8 Accessing File Systems from Multiple Hosts

Oracle HSM file systems can be shared among multiple hosts in any of several ways. Each approach has particular strengths in some situations and notable drawbacks in others. So the method that you choose depends on your specific requirements. Sharing methods include:

Accessing File Systems from Multiple Hosts Using Oracle HSM Software

Oracle HSM makes file systems available to multiple hosts by configuring a server and one or more clients that all mount the file system simultaneously. File data is then passed directly from the disk devices to the hosts via high-performance, local-path I/O, without the network and intermediate server latencies associated with NFS and CIFS shares. Only one host can be active as the metadata server at any one time, but any number of clients can be configured as potential metadata servers for redundancy purposes. There is no limit to the number of file-system mount points.

Oracle HSM supports multi-host access to both high-performance (ma) and general-purpose (ms) file systems in both multi-reader/single-writer and shared configurations, with or without archiving. There are only a few limitations:

  • Block (b-) special files are not supported.

  • Character (c-) special files are not supported.

  • FIFO named pipe (p-) special files are not supported.

  • Segmented files are not supported.

  • Mandatory locks are not supported.

    An EACCES error is returned if a mandatory lock is set. Advisory locks are supported, however. For more information about advisory locks, see the fcntl man page.

Oracle HSM software hosts can access file system data using either of two configurations, each with its own advantages and limitations in any given application.

In a single-writer, multiple-reader configuration, a single host mounts the file system with read/write access and all other hosts mount it read-only. Configuration is a simple matter of setting mount options. Since a single host makes all changes to the files, file consistency and data integrity are ensured, without additional file locking or consistency checks. All hosts read metadata as well as data directly from disk for best performance. But all hosts must have access to file-system metadata, so all hosts in an ma file system must have access to both data and metadata devices.

In a shared configuration, all hosts can read, write, and append file data, using leases that allow a single host to access files in a given way for a given period of time. The metadata server issues read, write, and append leases and manages renewals and conflicting lease requests. Shared file systems offer great flexibility, but configuration is a bit more complex and there is more file-system overhead. All hosts read file data directly from disk, but clients access metadata over the network. So clients that lack access to metadata devices can share an ma file system.
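
In practice, the main configuration difference between the two approaches is the set of mount options used on each host, as the procedures in this chapter show in detail. For orientation only, the /etc/vfstab entries used in the examples below reduce to something like this:

    # Single-writer, multiple-reader file system (swfs1):
    #   on the one writer host
    swfs1      -        /hsm/swfs1      samfs   -     no       writer
    #   on every reader host
    swfs1      -        /hsm/swfs1      samfs   -     no       reader
    #
    # Shared file system (shrfs), for example on the metadata server:
    shrfs      -        /hsm/shrfs      samfs   -     no       shared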

To configure access to data from multiple Oracle HSM hosts, select the desired approach and see either "Configuring an Oracle HSM Single-Writer, Multiple-Reader File System" or "Configuring an Oracle HSM Shared File System".

Configuring an Oracle HSM Single-Writer, Multiple-Reader File System

To configure a single-writer, multiple-reader file system, carry out the following tasks:

Create the File System on the Writer

Proceed as follows:

  1. Log in to the host that will serve as the writer using the root account.

    In the example, the writer host is named mds-write:

    root@mds-write:~# 
    
  2. On the host that will serve as the writer, open the /etc/opt/SUNWsamfs/mcf file in a text editor, and add a QFS file system. You can configure either a general-purpose ms or high-performance ma file system.

    On an ma file system with separate metadata devices, configure the metadata server for the file system as the writer. In the example below, we edit the mcf file on the host mds-write using the vi text editor. The example specifies an ma file system with the equipment identifier and family set name swfs1 and the equipment ordinal number 300:

    root@mds-write:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment          Equipment  Equipment  Family      Device   Additional
    # Identifier         Ordinal    Type       Set         State    Parameters
    #------------------  ---------  ---------  ---------   ------   ---------------
    swfs1                300        ma         swfs1       on
    /dev/dsk/c0t0d0s0    301        mm         swfs1       on
    /dev/dsk/c0t3d0s0    302        mr         swfs1       on
    /dev/dsk/c0t3d0s1    303        mr         swfs1       on
    
  3. Save the /etc/opt/SUNWsamfs/mcf file, and quit the editor.

    In the example, we save the changes and exit the vi editor:

    # Equipment          Equipment  Equipment  Family      Device   Additional
    # Identifier         Ordinal    Type       Set         State    Parameters
    #------------------  ---------  ---------  ---------   ------   ---------------
    swfs1                300        ma         swfs1       on
    /dev/dsk/c0t0d0s0    301        mm         swfs1       on
    /dev/dsk/c0t3d0s0    302        mr         swfs1       on
    /dev/dsk/c0t3d0s1    303        mr         swfs1       on
    :wq
    root@mds-write:~# 
    
  4. Check the mcf file for errors by running the sam-fsd command, and correct any errors found.

    The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error:

    root@mds-write:~# sam-fsd
    ...
    Would start sam-stagerd()
    Would start sam-amld()
    root@mds-write:~# 
    
  5. Tell the Oracle HSM service to re-read the mcf file and reconfigure itself accordingly. Use the command samd config.

    root@mds-write:~# samd config
    Configuring SAM-FS
    root@mds-write:~# 
    
  6. Create the file system using the sammkfs command and the family set name of the file system, as described in "Configure a High-Performance ma File System".

    In the example, the command creates the single-writer/multi-reader file system swfs1:

    root@mds-write:~# sammkfs swfs1
    Building 'swfs1' will destroy the contents of devices:
      /dev/dsk/c0t0d0s0
      /dev/dsk/c0t3d0s0
      /dev/dsk/c0t3d0s1
    Do you wish to continue? [y/N]yes ...
    
  7. Back up the operating system's /etc/vfstab file.

    root@mds-write:~# cp /etc/vfstab /etc/vfstab.backup
    root@mds-write:~# 
    
  8. Add the new file system to the operating system's /etc/vfstab file, as described in "Configure a High-Performance ma File System".

    In the example, we open the /etc/vfstab file in the vi text editor and add a line for the swfs1 family set device:

    root@mds-write:~# vi /etc/vfstab
    #File
    #Device    Device   Mount           System  fsck  Mount    Mount
    #to Mount  to fsck  Point           Type    Pass  at Boot  Options
    #--------  -------  --------        ------  ----  -------  -------------------
    /devices   -        /devices        devfs   -     no       -
    /proc      -        /proc           proc    -     no       -
    ...
    swfs1      -        /hsm/swfs1      samfs   -     no
           
    
  9. In the Mount Options column of the /etc/vfstab file, enter the writer mount option.

    Caution:

    Make sure that only one host is the writer at any given time. Allowing more than one host to mount a multiple-reader, single-writer file system using the writer option can corrupt the file system!
    #File
    #Device    Device   Mount           System  fsck  Mount    Mount
    #to Mount  to fsck  Point           Type    Pass  at Boot  Options
    #--------  -------  --------        ------  ----  -------  -------------------
    /devices   -        /devices        devfs   -     no       -
    /proc      -        /proc           proc    -     no       -
    ...
    swfs1      -        /hsm/swfs1      samfs   -     no       writer
    
  10. Make any other desired changes to the /etc/vfstab file. Add mount options using commas as separators.

    For example, to retry mounting the file system in the background if the first attempt does not succeed, add the bg mount option to the Mount Options field (see the mount_samfs man page for a comprehensive list of available mount options):

    #File
    #Device    Device   Mount           System  fsck  Mount    Mount
    #to Mount  to fsck  Point           Type    Pass  at Boot  Options
    #--------  -------  --------        ------  ----  -------  -------------------
    /devices   -        /devices        devfs   -     no       -
    /proc      -        /proc           proc    -     no       -
    ...
    swfs1      -        /hsm/swfs1      samfs   -     no       writer,bg
    
  11. Save the /etc/vfstab file, and quit the editor.

    #File
    #Device    Device   Mount           System  fsck  Mount    Mount
    #to Mount  to fsck  Point           Type    Pass  at Boot  Options
    #--------  -------  --------        ------  ----  -------  -------------------
    /devices   -        /devices        devfs   -     no       -
    /proc      -        /proc           proc    -     no       -
    ...
    swfs1      -        /hsm/swfs1      samfs   -     no       writer,bg      
    :wq
    root@mds-write:~# 
    
  12. Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.

    The mount-point permissions must be the same on all hosts, and users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/swfs1 mount-point directory and set permissions to 755 (-rwxr-xr-x):

    root@mds-write:~# mkdir /hsm/swfs1
    root@mds-write:~# chmod 755 /hsm/swfs1
    root@mds-write:~# 
    
  13. Mount the new file system:

    root@mds-write:~# mount /hsm/swfs1
    root@mds-write:~# 
    
  14. Once the single-writer, multiple-reader file system has been created on the writer host, configure the readers.

Configure the Readers

A reader is a host that mounts a file system read-only. For each host that you are configuring as a reader, proceed as follows:

  1. Log in to the host as root.

    In the example, the reader host is named reader1:

    root@reader1:~# 
    
  2. In a terminal window, retrieve the configuration information for the multiple-reader, single-writer file system using the samfsconfig device-path command, where device-path is the location where the command should start to search for file-system disk devices (such as /dev/dsk/*).

    The samfsconfig utility retrieves file-system configuration information by reading the identifying superblock that sammkfs writes on each device that is included in an Oracle HSM file system. The command returns the correct paths to each device in the configuration starting from the current host and flags devices that cannot be reached (for full information on command syntax and parameters, see the samfsconfig man page).

    In the example, the samfsconfig output shows the same equipment listed in the mcf file on mds-write, except that the paths to the devices are specified starting from the host reader1:

    root@reader1:~# samfsconfig /dev/dsk/*
    # Family Set 'swfs1' Created Thu Nov 21 07:17:00 2013
    # Generation 0 Eq count 4 Eq meta count 1
    #
    swfs1                300        ma         swfs1   -
    /dev/dsk/c1t0d0s0    301        mm         swfs1   -
    /dev/dsk/c1t3d0s0    302        mr         swfs1   -
    /dev/dsk/c1t3d0s1    303        mr         swfs1   -
    
  3. Copy the entries for the shared file system from the samfsconfig output. Then, in a second window, open the file /etc/opt/SUNWsamfs/mcf in a text editor, and paste the copied entries into the file.

    Alternatively, you could redirect the output of samfsconfig to the mcf file and then edit the result, as shown in the sketch after this step. Or you could use the samd buildmcf command to run samfsconfig and create the client mcf file automatically.

    In the example, the mcf file for the host reader1 looks like this once we add the commented-out column headings:

    root@reader1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    swfs1                 300        ma         swfs1    -
    /dev/dsk/c1t0d0s0     301        mm         swfs1    -
    /dev/dsk/c1t3d0s0     302        mr         swfs1    -
    /dev/dsk/c1t3d0s1     303        mr         swfs1    -
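
    A minimal sketch of the redirect alternative mentioned above (the appended entries keep the samfsconfig comment lines and the - device states, so they still need the edits described in the next step):

    root@reader1:~# samfsconfig /dev/dsk/* >> /etc/opt/SUNWsamfs/mcf
    root@reader1:~# vi /etc/opt/SUNWsamfs/mcf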
    
  4. Make sure that the Device State field is set to on for all devices. Then save the mcf file.

    root@reader1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    swfs1                 300        ma         swfs1      on
    /dev/dsk/c1t0d0s0     301        mm         swfs1      on
    /dev/dsk/c1t3d0s0     302        mr         swfs1      on
    /dev/dsk/c1t3d0s1     303        mr         swfs1      on
    :wq
    root@reader1:~# 
    
  5. Check the mcf file for errors by running the sam-fsd command, and correct any errors found.

    The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error:

    root@reader1:~# sam-fsd
    ...
    Would start sam-stagerd()
    Would start sam-amld()
    root@reader1:~# 
    
  6. Back up the operating system's /etc/vfstab file.

    root@reader1:~# cp /etc/vfstab /etc/vfstab.backup
    root@reader1:~# 
    
  7. Add the single-writer, multiple-reader file system to the host operating system's /etc/vfstab file.

    In the example, we open the /etc/vfstab file in the vi text editor and add a line for the swfs1 family set device:

    root@reader1:~# vi /etc/vfstab
    #File
    #Device    Device   Mount           System  fsck  Mount    Mount
    #to Mount  to fsck  Point           Type    Pass  at Boot  Options
    #--------  -------  --------        ------  ----  -------  -------------------
    /devices   -        /devices        devfs   -     no       -
    /proc      -        /proc           proc    -     no       -
    ...
    swfs1      -        /hsm/swfs1      samfs   -     no        
        
    
  8. In the Mount Options column of the /etc/vfstab file, enter the reader option.

    Caution:

    Make sure that the host mounts the file system using the reader option! Inadvertently using the writer mount option on more than one host can corrupt the file system!
    #File
    #Device    Device   Mount           System  fsck  Mount    Mount
    #to Mount  to fsck  Point           Type    Pass  at Boot  Options
    #--------  -------  --------        ------  ----  -------  -------------------
    /devices   -        /devices        devfs   -     no       -
    /proc      -        /proc           proc    -     no       -
    ...
    swfs1      -        /hsm/swfs1      samfs   -     no       reader      
    
  9. Add any other desired mount options using commas as separators, and make any other desired changes to the /etc/vfstab file. Then save the /etc/vfstab file.

    #File
    #Device    Device   Mount       System  fsck  Mount    Mount
    #to Mount  to fsck  Point       Type    Pass  at Boot  Options
    #--------  -------  --------    ------  ----  -------  -----------------------
    /devices   -        /devices    devfs   -     no       -
    /proc      -        /proc       proc    -     no       -
    ...
    swfs1      -        /hsm/swfs1  samfs   -     no       reader,bg
    :wq
    root@reader1:~# 
    
  10. Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.

    The mount-point permissions must be the same on all hosts, and users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/swfs1 mount-point directory and set permissions to 755 (-rwxr-xr-x), just as we did on the writer host:

    root@reader1:~# mkdir /hsm/swfs1
    root@reader1:~# chmod 755 /hsm/swfs1
    root@reader1:~# 
    
  11. Mount the new file system:

    root@reader1:~# mount /hsm/swfs1
    root@reader1:~# 
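
    Optionally, you can confirm that the file system mounted with the reader option by checking the system mount table; the Options field of the entry in /etc/mnttab should include reader. A minimal check (output abridged and illustrative):

    root@reader1:~# grep swfs1 /etc/mnttab
    swfs1   /hsm/swfs1   samfs   reader,bg,...   ...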
    
  12. Repeat this procedure until all reader hosts have been configured to mount the file system read-only.

  13. If you plan on using the sideband database feature, go to "Configuring the Reporting Database".

  14. Otherwise, go to "Configuring Notifications and Logging".

Configuring an Oracle HSM Shared File System

Oracle HSM shared file systems give multiple Oracle HSM hosts read, write, and append access to files. All hosts mount the file system and have direct connections to the storage devices. In addition, one host, the metadata server (MDS), has exclusive control over file-system metadata and mediates between hosts seeking access to the same files. The server provides client hosts with metadata updates via an Ethernet local network and controls file access by issuing, renewing, and revoking read, write, and append leases. Both non-archiving and archiving file systems of either the high-performance ma or general-purpose ms type can be shared.

To configure a shared file system, carry out the following tasks:

Configuring Metadata Servers for Use with a Shared File System

To configure a metadata server to support a shared file system, carry out the tasks listed below:

Create a Hosts File on Active and Potential Metadata Servers

On the active and potential metadata servers, you must create a hosts file that lists network address information for the servers and clients of a shared file system. The hosts file is stored alongside the mcf file in the /etc/opt/SUNWsamfs/ directory. During the initial creation of a shared file system, the sammkfs -S command configures sharing using the settings stored in this file. So create it now, using the procedure below.

  1. Log in to the server as root.

    In the example, the server is named mds1:

    root@mds1:~# 
    
  2. Using a text editor, create the file /etc/opt/SUNWsamfs/hosts.family-set-name on the metadata server, replacing family-set-name with the family set name of the file system that you intend to share.

    In the example, we create the file hosts.shrfs using the vi text editor. We add some optional headings, starting each line with a hash sign (#), indicating a comment:

    root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs
    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    
  3. Add the hostname and the IP address or domain name of the metadata server in two columns, separated by whitespace characters.

    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    mds1                  10.79.213.117
    
  4. Add a third column, separated from the network address by whitespace characters. In this column, enter 1, the ordinal number for the active metadata server.

    In this example, there is only one metadata server, so we enter 1:

    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    mds1                  10.79.213.117           1      
     
    
  5. Add a fourth column, separated from the server ordinal by whitespace characters. In this column, enter 0 (zero).

    A 0, - (hyphen), or blank value in the fourth column indicates that the host is on—configured with access to the shared file system. A 1 (numeral one) indicates that the host is off—configured but without access to the file system (for information on using these values when administering shared file systems, see the samsharefs man page and the sketch following this step).

    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    mds1                  10.79.213.117            1        0    
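
    Once the shared file system has been created and mounted, the information in this file can be displayed or pushed out to the file system with the samsharefs command. The lines below are a minimal sketch; options and behavior vary by release, so check the samsharefs man page:

    root@mds1:~# samsharefs shrfs        # display the current hosts configuration
    root@mds1:~# samsharefs -u shrfs     # update it from /etc/opt/SUNWsamfs/hosts.shrfs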
    
  6. Add a fifth column, separated from the on/off column by whitespace characters. In this column, enter the keyword server to indicate the currently active metadata server:

    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    mds1                  10.79.213.117           1        0    server 
    
  7. If you plan to include one or more hosts as potential metadata servers, create an entry for each. Increment the server ordinal each time. But do not include the server keyword (there can be only one active metadata server per file system).

    In the example, the host mds2 is a potential metadata server with the server ordinal 2:

    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    mds1                  10.79.213.117           1        0    server 
    mds2                  10.79.213.217           2        0   
    
  8. Add a line for each client host, each with a server ordinal value of 0.

    A server ordinal of 0 identifies the host as a client. In the example, we add two clients, clnt1 and clnt2.

    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    mds1                  10.79.213.117           1        0    server 
    mds2                  10.79.213.217           2        0   
    clnt1                 10.79.213.133           0        0
    clnt2                 10.79.213.147           0        0
    
  9. Save the /etc/opt/SUNWsamfs/hosts.family-set-name file, and quit the editor.

    In the example, we save the changes to /etc/opt/SUNWsamfs/hosts.shrfs and exit the vi editor:

    # /etc/opt/SUNWsamfs/hosts.shrfs
    #                                             Server   On/  Additional
    #Host Name            Network Interface       Ordinal  Off  Parameters
    #------------------   ----------------------  -------  ---  ----------
    mds1                  10.79.213.117           1        0    server 
    mds2                  10.79.213.217           2        0   
    clnt1                 10.79.213.133           0        0
    clnt2                 10.79.213.147           0        0
    :wq
    root@mds1:~# 
    
  10. Place a copy of the new /etc/opt/SUNWsamfs/hosts.family-set-name file on any potential metadata servers that are included in the shared file-system configuration.

  11. Now create the shared file system on the active metadata server.

Create the Shared File System on the Active Server

Proceed as follows:

  1. Log in to the server as root.

    In the example, the server is named mds1:

    root@mds1:~# 
    
  2. On the metadata server (MDS), open the /etc/opt/SUNWsamfs/mcf file in a text editor and add a QFS file system. You can configure either a general-purpose ms or high-performance ma file system.

    In the example below, we edit the mcf file on the host mds1 using the vi text editor. The example specifies an ma file system with the equipment identifier and family set name shrfs and the equipment ordinal number 300:

    root@mds1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300        ma         shrfs      on       
    /dev/dsk/c0t0d0s0     301        mm         shrfs      on
    /dev/dsk/c0t3d0s0     302        mr         shrfs      on
    /dev/dsk/c0t3d0s1     303        mr         shrfs      on
    
  3. In the Additional Parameters field of the row for the ma file-system equipment, enter the shared parameter:

    # Equipment           Equipment   Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300         ma        shrfs      on       shared
    /dev/dsk/c0t0d0s0     301         mm        shrfs      on
    /dev/dsk/c0t3d0s0     302         mr        shrfs      on
    /dev/dsk/c0t3d0s1     303         mr        shrfs      on
    
  4. Save the /etc/opt/SUNWsamfs/mcf file, and quit the editor.

    In the example, we save the changes and exit the vi editor:

    shrfs                 300         ma         shrfs     on      shared
    /dev/dsk/c0t0d0s0     301         mm         shrfs     on
    /dev/dsk/c0t3d0s0     302         mr         shrfs     on
    /dev/dsk/c0t3d0s1     303         mr         shrfs     on
    :wq
    root@mds1:~# 
    
  5. Check the mcf file for errors by running the sam-fsd command, and correct any errors found.

    The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error:

    root@mds1:~# sam-fsd
    ...
    Would start sam-stagerd()
    Would start sam-amld()
    root@mds1:~# 
    
  6. Tell the Oracle HSM service to reread the mcf file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.

    root@mds1:~# samd config
    root@mds1:~# 
    
  7. Create the file system using the sammkfs -S command and the family set name of the file system, as described in "Configure a High-Performance ma File System".

    The sammkfs command reads the hosts.family-set-name and mcf files and creates a shared file system with the specified properties. In the example, the command reads the sharing parameters from the hosts.shrfs file and creates the shared file system shrfs:

    root@mds1:~# sammkfs -S shrfs
    Building 'shrfs' will destroy the contents of devices:
      /dev/dsk/c0t0d0s0
      /dev/dsk/c0t3d0s0
      /dev/dsk/c0t3d0s1
    Do you wish to continue? [y/N]yes ...
    root@mds1:~# 
    
  8. Next, mount the shared file system on the active metadata server.

Mount the Shared File System on the Active Server

Proceed as follows:

  1. Log in to the server as root.

    In the example, the server is named mds1:

    root@mds1:~# 
    
  2. Back up the operating system's /etc/vfstab file.

    root@mds1:~# cp /etc/vfstab /etc/vfstab.backup
    root@mds1:~# 
    
  3. Add the new file system to the operating system's /etc/vfstab file, as described in "Configure a High-Performance ma File System".

    In the example, we open the /etc/vfstab file in the vi text editor and add a line for the shrfs family set device:

    root@mds1:~# vi /etc/vfstab
    #File
    #Device    Device   Mount          System  fsck  Mount    Mount
    #to Mount  to fsck  Point          Type    Pass  at Boot  Options
    #--------  -------  --------       ------  ----  -------  --------------------
    /devices   -        /devices       devfs   -     no       -
    /proc      -        /proc          proc    -     no       -
    ...
    shrfs      -        /hsm/shrfs     samfs   -     no     
    
  4. In the Mount Options column, enter the shared option:

    #File
    #Device    Device   Mount          System  fsck  Mount    Mount
    #to Mount  to fsck  Point          Type    Pass  at Boot  Options
    #--------  -------  --------       ------  ----  -------  --------------------
    /devices   -        /devices       devfs   -     no       -
    /proc      -        /proc          proc    -     no       -
    ...
    shrfs      -        /hsm/shrfs     samfs   -     no       shared      
    
  5. Make any other desired changes to the /etc/vfstab file.

    For example, to retry mounting the file system in the background if the initial attempt does not succeed, add the bg mount option to the Mount Options field (for a full description of available mount options, see the mount_samfs man page):

    #File
    #Device    Device   Mount          System  fsck  Mount    Mount
    #to Mount  to fsck  Point          Type    Pass  at Boot  Options
    #--------  -------  --------       ------  ----  -------  --------------------
    /devices   -        /devices       devfs   -     no       -
    /proc      -        /proc          proc    -     no       -
    ...
    shrfs      -        /hsm/shrfs     samfs   -     no       shared,bg     
     
    
  6. Save the /etc/vfstab file, and quit the editor.

    #File
    #Device    Device   Mount          System  fsck  Mount    Mount
    #to Mount  to fsck  Point          Type    Pass  at Boot  Options
    #--------  -------  --------       ------  ----  -------  --------------------
    /devices   -        /devices       devfs   -     no       -
    /proc      -        /proc          proc    -     no       -
    ...
    shrfs      -        /hsm/shrfs     samfs   -     no       shared,bg      
    :wq
    root@mds1:~# 
    
  7. Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.

    The mount-point permissions must be the same on the metadata server and on all clients, and users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/shrfs mount-point directory and set permissions to 755 (-rwxr-xr-x):

    root@mds1:~# mkdir /hsm/shrfs
    root@mds1:~# chmod 755 /hsm/shrfs
    root@mds1:~# 
    
  8. Mount the new file system:

    root@mds1:~# mount /hsm/shrfs
    root@mds1:~# 
    
  9. If your hosts are configured with multiple network interfaces, you may want to use local hosts files to route network communications.

  10. Otherwise, once the shared file system has been created on the metadata server, configure file system clients for sharing.

Configuring File System Clients for a Shared File System

Clients include both hosts that are configured purely as clients and those that are configured as potential metadata servers. In most respects, configuring a client is much the same as configuring a server. Each client includes exactly the same devices as the server. Only the mount options and the exact paths to the devices change (controller numbers are assigned by each client host and may therefore vary).

To configure one or more clients to support a shared file system, carry out the tasks listed below:

Create the Shared File System on the Solaris Clients

For each client, proceed as follows:

  1. On the client, log in as root.

    In the example, the client is named clnt1:

    root@clnt1:~# 
    
  2. In a terminal window, enter the command samfsconfig device-path, where device-path is the location where the command should start to search for file-system disk devices (such as /dev/dsk/* or /dev/zvol/dsk/*).

    The samfsconfig command retrieves the configuration information for the shared file system.

    root@clnt1:~# samfsconfig /dev/dsk/*
    
  3. If the host has access to the metadata devices for the file system and is thus suitable for use as a potential metadata server, the samfsconfig output closely resembles the mcf file that you created on the file-system metadata server.

    In our example, host clnt1 has access to the metadata devices (equipment type mm), so the command output shows the same equipment listed in the mcf file on the server, mds1. Only the host-assigned device controller numbers differ:

    root@clnt1:~# samfsconfig /dev/dsk/*
    # Family Set 'shrfs' Created Thu Feb 21 07:17:00 2013
    # Generation 0 Eq count 4 Eq meta count 1
    #
    shrfs                 300        ma         shrfs      -
    /dev/dsk/c1t0d0s0     301        mm         shrfs      -
    /dev/dsk/c1t3d0s0     302        mr         shrfs      -
    /dev/dsk/c1t3d0s1     303        mr         shrfs      -
    
  4. If the host does not have access to the metadata devices for the file system, the samfsconfig command cannot find the metadata devices and thus cannot fit the Oracle HSM devices that it discovers into the file-system configuration. The command output lists Ordinal 0—the metadata device—under Missing Slices, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.

    In our example, host clnt2 has access to the data devices only. So the samfsconfig output looks like this:

    root@clnt2:~# samfsconfig /dev/dsk/*
    # Family Set 'shrfs' Created Thu Feb 21 07:17:00 2013
    #
    # Missing slices
    # Ordinal 0
    # /dev/dsk/c4t3d0s0    302         mr         shrfs   -
    # /dev/dsk/c4t3d0s1    303         mr         shrfs   -
    
  5. Copy the entries for the shared file system from the samfsconfig output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf file in a text editor, and paste the copied entries into the file.

    In our first example, the host, clnt1, has access to the metadata devices for the file system, so the mcf file starts out looking like this:

    root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   --------------
    shrfs                 300        ma         shrfs      -        
    /dev/dsk/c1t0d0s0     301        mm         shrfs      -
    /dev/dsk/c1t3d0s0     302        mr         shrfs      -
    /dev/dsk/c1t3d0s1     303        mr         shrfs      -
    

    In the second example, the host, clnt2, does not have access to the metadata devices for the file system, so the mcf file starts out looking like this:

    root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   --------------
    # /dev/dsk/c4t3d0s0    302         mr         shrfs   -
    # /dev/dsk/c4t3d0s1    303         mr         shrfs   -
    
  6. If the host has access to the metadata devices for the file system, add the shared parameter to the Additional Parameters field of the entry for the shared file system.

    In the example, the host, clnt1, has access to the metadata:

    root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   --------------
    shrfs                 300        ma         shrfs      -        shared
    /dev/dsk/c1t0d0s0     301        mm         shrfs      -
    /dev/dsk/c1t3d0s0     302        mr         shrfs      -
    /dev/dsk/c1t3d0s1     303        mr         shrfs      -
    
  7. If the host does not have access to the metadata devices for the file system, add a line for the shared file system and include the shared parameter:

    root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   --------------
    shrfs                 300        ma         shrfs      -        shared
    # /dev/dsk/c4t3d0s0     302        mr         shrfs      -       
    # /dev/dsk/c4t3d0s1     303        mr         shrfs      -
    
  8. If the host does not have access to the metadata devices for the file system, add a line for the metadata device. Set the Equipment Identifier field to nodev (no device) and set the remaining fields to exactly the same values as they have on the metadata server:

    root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   --------------
    shrfs                 300         ma        shrfs      on       shared
    nodev                 301         mm        shrfs      on 
    # /dev/dsk/c4t3d0s0     302         mr        shrfs    -
    # /dev/dsk/c4t3d0s1     303         mr        shrfs    -
    
  9. If the host does not have access to the metadata devices for the file system, uncomment the entries for the data devices.

    root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   --------------
    shrfs                 300        ma         shrfs      on       shared
    nodev                 301        mm         shrfs      on 
    /dev/dsk/c4t3d0s0     302        mr         shrfs      - 
    /dev/dsk/c4t3d0s1     303        mr         shrfs      - 
    
  10. Make sure that the Device State field is set to on for all devices, and save the mcf file.

    In our first example, the host, clnt1, has access to the metadata devices for the file system, so the mcf file ends up looking like this:

    root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300        ma         shrfs      on       shared
    /dev/dsk/c1t0d0s0     301        mm         shrfs      on 
    /dev/dsk/c1t3d0s0     302        mr         shrfs      on 
    /dev/dsk/c1t3d0s1     303        mr         shrfs      on 
    :wq
    root@clnt1:~# 
    

    In the second example, the host, clnt2, does not have access to the metadata devices for the file system, so the mcf file ends up looking like this:

    root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300        ma         shrfs      on       shared
    nodev                 301        mm         shrfs      on 
    /dev/dsk/c4t3d0s0     302        mr         shrfs      on 
    /dev/dsk/c4t3d0s1     303        mr         shrfs      on 
    :wq
    root@clnt2:~# 
    
  11. Check the mcf file for errors by running the sam-fsd command, and correct any errors found.

    The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf file on clnt1, and it runs without errors:

    root@clnt1:~# sam-fsd
    ...
    Would start sam-stagerd()
    Would start sam-amld()
    root@clnt1:~# 
    
  12. At this point, if your hosts are configured with multiple network interfaces, you may want to use local hosts files to route network communications.

  13. Next, mount the shared file system on the Solaris clients.

Mount the Shared File System on the Solaris Clients

For each client, proceed as follows:

  1. On the Solaris client, log in as root.

    In the example, the client is named clnt1:

    root@clnt1:~# 
    
  2. Back up the operating system's /etc/vfstab file.

    root@clnt1:~# cp /etc/vfstab /etc/vfstab.backup
    root@clnt1:~# 
    
  3. Open the /etc/vfstab file in a text editor, and add a line for the shared file system.

    In the example, we open the file in the vi text editor and add a line for the shrfs family set device:

    root@clnt1:~# vi /etc/vfstab
    #File
    #Device    Device   Mount         System  fsck  Mount    Mount
    #to Mount  to fsck  Point         Type    Pass  at Boot  Options
    #--------  -------  --------      ------  ----  -------  ---------------------
    /devices   -        /devices      devfs   -     no       -
    /proc      -        /proc         proc    -     no       -
    ...
    shrfs      -        /hsm/shrfs    samfs   -     no   
    
  4. Add any other desired mount options using commas as separators, and make any other desired changes to the /etc/vfstab file. Then save the /etc/vfstab file.

    In the example, we add no mount options.

    #File
    #Device    Device   Mount         System  fsck  Mount    Mount
    #to Mount  to fsck  Point         Type    Pass  at Boot  Options
    #--------  -------  --------      ------  ----  -------  ---------------------
    /devices   -        /devices      devfs   -     no       -
    /proc      -        /proc         proc    -     no       -
    ...
    shrfs      -        /hsm/shrfs    samfs   -     no       -
    :wq
    root@clnt1:~# 
    
  5. Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.

    The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/shrfs mount-point directory and set permissions to 755 (-rwxr-xr-x):

    root@clnt1:~# mkdir /hsm/shrfs
    root@clnt1:~# chmod 755 /hsm/shrfs
    root@clnt1:~# 
    
  6. Mount the shared file system:

    root@clnt1:~# mount /hsm/shrfs
    root@clnt1:~# 
    
  7. If the shared file system includes Linux clients, create the shared file system on the Linux clients.

  8. If you are configuring an Oracle HSM shared archiving file system, go to your next task, "Configuring Archival Storage for a Shared File System".

  9. Otherwise, stop here. You have configured the Oracle HSM shared file system.

Create the Shared File System on the Linux Clients

For each client, proceed as follows:

  1. On the Linux client, log in as root.

    In the example, the Linux client host is named clntL:

    [root@clntL ~]# 
    
  2. In a terminal window, enter the command samfsconfig device-path, where device-path is the location where the command should start to search for file-system disk devices (such as /dev/*).

    The samfsconfig command retrieves the configuration information for the shared file system. Since Linux hosts do not have access to the metadata devices for the file system, the samfsconfig command cannot find the metadata devices and thus cannot fit the Oracle HSM devices that it discovers into the file-system configuration. The command output lists Ordinal 0—the metadata device—under Missing Slices, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.

    In our example, the samfsconfig output for Linux host clntL looks like this:

    [root@clntL ~]# samfsconfig /dev/*
    # Family Set 'shrfs' Created Thu Feb 21 07:17:00 2013
    #
    # Missing slices
    # Ordinal 0
    # /dev/sda4            302         mr         shrfs   -
    # /dev/sda5            303         mr         shrfs   -
    
  3. Copy the entries for the shared file system from the samfsconfig output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf file in a text editor, and paste the copied entries into the file.

    In the example, the mcf file for the Linux host, clntL, starts out looking like this:

    [root@clntL ~]# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    # /dev/sda4             302        mr         shrfs   -
    # /dev/sda5             303        mr         shrfs   -
    
  4. In the mcf file, insert a line for the shared file system, and include the shared parameter.

    [root@clntL ~]# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300        ma         shrfs      -        shared
    # /dev/sda4             302         mr         shrfs     -
    # /dev/sda5             303         mr         shrfs   -
    
  5. In the mcf file, insert lines for the file system's metadata devices. Since the Linux host does not have access to metadata devices, set the Equipment Identifier field to nodev (no device) and then set the remaining fields to exactly the same values as they have on the metadata server:

    [root@clntL ~]# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300         ma        shrfs      on       shared
    nodev                 301         mm        shrfs      on 
    # /dev/sda4            302         mr         shrfs      -
    # /dev/sda5            303         mr         shrfs      -
    
  6. In the mcf file, uncomment the entries for the data devices.

    [root@clntL ~]# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300         ma        shrfs      on       shared
    nodev                 301         mm        shrfs      on 
    /dev/sda4             302         mr        shrfs      -
    /dev/sda5             303         mr        shrfs      -
    
  7. Make sure that the Device State field is set to on for all devices, and save the mcf file.

    [root@clntL ~]# vi /etc/opt/SUNWsamfs/mcf
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   ---------------
    shrfs                 300         ma        shrfs      on       shared
    nodev                 301         mm        shrfs      on 
    /dev/sda4             302         mr        shrfs      on 
    /dev/sda5             303         mr        shrfs      on 
    :wq
    [root@clntL ~]# 
    
  8. Check the mcf file for errors by running the sam-fsd command, and correct any errors found.

    The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf file on the Linux client, clntL:

    [root@clntL ~]# sam-fsd
    ...
    Would start sam-stagerd()
    Would start sam-amld()
    [root@clntL ~]# 
    
  9. Now, mount the shared file system on the Linux clients.

Mount the Shared File System on the Linux Clients

For each client, proceed as follows:

  1. On the Linux client, log in as root.

    In the example, the Linux client host is named clntL:

    [root@clntL ~]# 
    
  2. Back up the operating system's /etc/fstab file.

    [root@clntL ~]# cp /etc/fstab /etc/fstab.backup
    
  3. Open the /etc/fstab file in a text editor, and start a line for the shared file system.

    In the example, after backing up the /etc/fstab file on clntL, we open the file in the vi text editor and add a line for the shrfs family set device:

    [root@clntL ~]# vi /etc/fstab
    #File
    #Device    Mount         System    Mount                    Dump      Pass
    #to Mount  Point         Type      Options                  Frequency Number
    #--------  -------       --------  -----------------------  --------- ------ 
    ...    
    /proc      /proc         proc     defaults 
    shrfs      /hsm/shrfs    samfs
    
  4. In the fourth column of the file, add the mandatory shared mount option.

    [root@clntL ~]# vi /etc/fstab
    #File
    #Device    Mount         System    Mount                    Dump      Pass
    #to Mount  Point         Type      Options                  Frequency Number
    #--------  -------       --------  -----------------------  --------- ------ 
    ...    
    /proc      /proc         proc     defaults 
    shrfs      /hsm/shrfs    samfs    shared
    
  5. In the fourth column of the file, add any other desired mount options using commas as separators.

    Linux clients support the following additional mount options:

    • rw, ro

    • retry

    • meta_timeo

    • rdlease, wrlease, aplease

    • minallocsz, maxallocsz

    • noauto, auto

    In the example, we add the option noauto (a further tuning sketch follows this step):

    [root@clntL ~]# vi /etc/fstab
    #File
    #Device    Mount         System    Mount                    Dump      Pass
    #to Mount  Point         Type      Options                  Frequency Number
    #--------  -------       --------  -----------------------  --------- ------ 
    ...    
    /proc      /proc         proc     defaults 
    shrfs      /hsm/shrfs    samfs    shared,noauto
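
    If you need to tune lease or metadata-cache behavior on a Linux client, the same field can carry those options as well. The values below are illustrative assumptions, not recommendations; see the mount_samfs man page for defaults and valid ranges:

    shrfs      /hsm/shrfs    samfs    shared,noauto,rdlease=60,wrlease=60,meta_timeo=10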
             
    
  6. Enter zero (0) in each of the two remaining columns in the file. Then save the /etc/fstab file.

    [root@clntL ~]# vi /etc/fstab
    #File
    #Device    Mount         System    Mount                    Dump      Pass
    #to Mount  Point         Type      Options                  Frequency Number
    #--------  -------       --------  -----------------------  --------- ------ 
    ...    
    /proc      /proc         proc     defaults 
    shrfs      /hsm/shrfs    samfs    shared,noauto              0         0        
    :wq
    [root@clntL ~]# 
    
  7. Create the mount point specified in the /etc/fstab file, and set the access permissions for the mount point.

    The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /hsm/shrfs mount-point directory and set permissions to 755 (-rwxr-xr-x):

    [root@clntL ~]# mkdir /hsm/shrfs
    [root@clntL ~]# chmod 755 /hsm/shrfs
    
  8. Mount the shared file system. Use the command mount mountpoint, where mountpoint is the mount point specified in the /etc/fstab file.

    As the example shows, the mount command generates a warning. This is normal and can be ignored:

    [root@clntL ~]# mount /hsm/shrfs
    Warning: loading SUNWqfs will taint the kernel: SMI license 
    See http://www.tux.org/lkml/#export-tainted for information 
    about tainted modules. Module SUNWqfs loaded with warnings 
    [root@clntL ~]# 
    
  9. If you are configuring an Oracle HSM shared archiving file system, go to your next task, "Configuring Archival Storage for a Shared File System".

  10. If you plan on using the sideband database feature, go to "Configuring the Reporting Database".

  11. Otherwise, go to "Configuring Notifications and Logging".

Use Local Hosts Files to Route Network Communications

Individual hosts do not require local hosts files. The file system identifies the active metadata server and the network interfaces of active and potential metadata servers for all file system hosts (see "Create a Hosts File on Active and Potential Metadata Servers"). But local hosts files can be useful when you need to selectively route network traffic between file-system hosts that have multiple network interfaces.

Each file-system host looks up the network interfaces for other hosts on the metadata server. Hostnames and IP addresses are listed in the global hosts file for the file system, /etc/opt/SUNWsamfs/hosts.family-set-name, where family-set-name is the family set name of the shared file system. Then the host looks for a local hosts file, /etc/opt/SUNWsamfs/hosts.family-set-name.local.

If there is no local hosts file, the host uses the interface addresses specified in the global hosts file. Hosts are used in the order specified by the global file.

If there is a local hosts file, the host compares it with the global file and uses only those interfaces that are listed in both files. Hosts are used in the order specified in the local file.

So, by using different addresses in each file, you can control the interfaces used by different hosts. To configure local hosts files, use the procedure outlined below:

  1. On each active and potential metadata server host, edit the global hosts file for the shared file system so that it routes server and host communications in the required way.

    For the examples in this section, the shared file system, shrfs2, includes an active metadata server, mds1, and one potential metadata server, mds2, each with two network interfaces. There are also two clients, clnt1 and clnt2.

    We want the active and potential metadata servers to communicate with each other via private network addresses and with the clients via hostnames that Domain Name Service (DNS) can resolve to addresses on the public, local area network (LAN).

    So we edit /etc/opt/SUNWsamfs/hosts.shrfs2, the file system's global hosts file. We specify private network interface addresses for the active and potential servers. But, for the clients, we supply the host names rather than addresses:

    root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs2
    # /etc/opt/SUNWsamfs/hosts.shrfs2
    #                                    Server   On/  Additional
    #Host Name        Network Interface  Ordinal  Off  Parameters
    #---------------  -----------------  -------  ---  ----------
    mds1              172.16.0.129       1        0    server 
    mds2              172.16.0.130       2        0   
    clnt1             clnt1              0        0
    clnt2             clnt2              0        0
    :wq
    root@mds1:~# 
    
  2. Create a local hosts file on each of the active and potential metadata servers, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the equipment identifier of the shared file system. Only include interfaces for the networks that you want the active and potential servers to use.

    In our example, we want the active and potential metadata servers to communicate with each other over the private network, so the local hosts file on each server, hosts.shrfs2.local, lists private addresses for only two hosts, the active and the potential metadata servers:

    root@mds1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs2.local
    # /etc/opt/SUNWsamfs/hosts.shrfs2.local on mds1
    #                                    Server   On/  Additional
    #Host Name        Network Interface  Ordinal  Off  Parameters
    #---------------  -----------------  -------  ---  ----------
    mds1              172.16.0.129       1        0    server 
    mds2              172.16.0.130       2        0   
    :wq
    root@mds1:~# ssh root@mds2
    Password:
    
    root@mds2:~# vi /etc/opt/SUNWsamfs/hosts.shrfs2.local
    # /etc/opt/SUNWsamfs/hosts.shrfs2.local on mds2
    #                                    Server   On/  Additional
    #Host Name        Network Interface  Ordinal  Off  Parameters
    #---------------  -----------------  -------  ---  ----------
    mds1              172.16.0.129       1        0    server 
    mds2              172.16.0.130       2        0   
    :wq
    root@mds2:~# exit
    root@mds1:~# 
    
  3. Create a local hosts file on each of the clients, using the path and file name /etc/opt/SUNWsamfs/hosts.family-set-name.local, where family-set-name is the equipment identifier of the shared file system. Only include interfaces for the networks that you want the clients to use.

    In our example, we want the clients to communicate with the server only via the public network. So the files include hostnames for only two hosts, the active and potential metadata servers:

    root@mds1:~# ssh root@clnt1
    Password:
    root@clnt1:~# vi /etc/opt/SUNWsamfs/hosts.shrfs2.local
    # /etc/opt/SUNWsamfs/hosts.shrfs2.local on clnt1 
    #                                    Server   On/  Additional
    #Host Name        Network Interface  Ordinal  Off  Parameters
    #---------------  -----------------  -------  ---  ----------
    mds1              mds1               1        0    server 
    mds2              mds2               2        0   
    :wq
    root@clnt1:~# exit
    root@mds1:~# ssh root@clnt2
    Password:
    
    root@clnt2:~# vi /etc/opt/SUNWsamfs/hosts.shrfs2.local
    # /etc/opt/SUNWsamfs/hosts.shrfs2.local on clnt2
    #                                    Server   On/  Additional
    #Host Name        Network Interface  Ordinal  Off  Parameters
    #---------------  -----------------  -------  ---  ----------
    mds1              mds1               1        0    server 
    mds2              mds2               2        0   
    :wq
    root@clnt2:~# exit
    root@mds1:~# 
    
  4. If you started this procedure while finishing the configuration of the server, go to "Mount the Shared File System on the Active Server".

  5. If you started this procedure while configuring a client, go to "Mount the Shared File System on the Solaris Clients".

Configuring Archival Storage for a Shared File System

To set up the archival storage for an archiving Oracle HSM shared file system, carry out the following tasks:

Connect Tape Drives to Server and Datamover Hosts Using Persistent Bindings

In a shared archiving file system, all potential metadata servers must have access to the library and tape drives. If you decide to distribute tape I/O across the hosts of the shared archiving file system, one or more clients will also need access to drives. So you must configure each of these hosts to address each of the drives in a consistent way.

The Solaris operating system attaches drives to the system device tree in the order in which it discovers the devices at startup. This order may or may not reflect the order in which devices are discovered by other file system hosts or the order in which they are physically installed in the removable media library. So you need to persistently bind the devices to each host in the same way that they are bound to the other hosts and in the same order in which they are installed in the removable media library.

The procedure below outlines the required steps (for full information on creating persistent bindings, see the Solaris devfsadm and devlinks man pages and the administration documentation for your version of the Solaris operating system):

  1. Log in to the active metadata server as root.

    root@mds1:~# 
    
  2. If you do not know the current physical order of the drives in the library, create a mapping file as described in "Determine the Order in Which Drives are Installed in the Library".

    In the example, the device-mappings.txt file looks like this:

    LIBRARY SOLARIS          SOLARIS 
    DEVICE  LOGICAL          PHYSICAL
    NUMBER  DEVICE           DEVICE
    ------- -------------    --------------------------------------------------
       2    /dev/rmt/0cbn -> ../../devices/pci@8,.../st@w500104f00093c438,0:cbn
       1    /dev/rmt/1cbn -> ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
       3    /dev/rmt/2cbn -> ../../devices/pci@8,.../st@w500104f000c086e1,0:cbn
       4    /dev/rmt/3cbn -> ../../devices/pci@8,.../st@w500104f000b6d98d,0:cbn
    
  3. Open the /etc/devlink.tab file in a text editor.

    In the example, we use the vi editor:

    root@mds1:~# vi /etc/devlink.tab
    # Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
    # This is the table used by devlinks
    # Each entry should have 2 fields; but may have 3.  Fields are separated
    # by single tab ('\t') characters.
    ...
    
  4. Using the device-mappings.txt file as a guide, add a line to the /etc/devlink.tab file that remaps a starting node in the Solaris tape device tree, rmt/node-number, to the first drive in the library. Enter the line in the form type=ddi_byte:tape; addr=device_address,0; rmt/node-number\M0, where device_address is the physical address of the device and node-number is a position in the Solaris device tree that is high enough to avoid conflicts with any devices that Solaris configures automatically (Solaris starts from node 0).

    In the example, we note the address of the first drive in the library (library device 1), w500104f0008120fe, and see that the device is currently attached to the host at rmt/1:

    root@mds1:~# vi /root/device-mappings.txt 
    LIBRARY SOLARIS          SOLARIS 
    DEVICE  LOGICAL          PHYSICAL
    NUMBER  DEVICE           DEVICE
    ------- -------------    --------------------------------------------------
       2    /dev/rmt/0cbn -> ../../devices/pci@8,.../st@w500104f00093c438,0:cbn
       1    /dev/rmt/1cbn -> ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
       3    /dev/rmt/2cbn -> ../../devices/pci@8,.../st@w500104f000c086e1,0:cbn
       4    /dev/rmt/3cbn -> ../../devices/pci@8,.../st@w500104f000b6d98d,0:cbn
    

    So we create a line in /etc/devlink.tab that remaps the non-conflicting node rmt/60 to the number 1 drive in the library, w500104f0008120fe:

    root@mds1:~# vi /etc/devlink.tab
    # Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
    ...
    type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
    :w
    
  5. Continue to add lines to the /etc/devlink.tab file for each tape device that is assigned for Oracle HSM archiving, so that the drive order in the device tree on the metadata server matches the installation order in the library. Save the file.

    In the example, we note the order and addresses of the three remaining devices: library drive 2 at w500104f00093c438, library drive 3 at w500104f000c086e1, and library drive 4 at w500104f000b6d98d:

    root@mds1:~# vi /root/device-mappings.txt 
    ...
       2    /dev/rmt/0cbn -> ../../devices/pci@8,.../st@w500104f00093c438,0:cbn
       1    /dev/rmt/1cbn -> ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
       3    /dev/rmt/2cbn -> ../../devices/pci@8,.../st@w500104f000c086e1,0:cbn
       4    /dev/rmt/3cbn -> ../../devices/pci@8,.../st@w500104f000b6d98d,0:cbn
    

    Then we map the device addresses to the next three Solaris device nodes (rmt/61, rmt/62, and rmt/63), maintaining the same order as in the library:

    root@mds1:~# vi /etc/devlink.tab
    ...
    type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
    type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
    type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
    type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
    :wq
    root@mds1:~# 
    
  6. Delete all existing links to the tape devices in /dev/rmt.

    root@mds1:~# rm /dev/rmt/* 
    
  7. Create new, persistent tape-device links from the entries in the /etc/devlink.tab file. Use the command devfsadm -c tape.

    Each time that the devfsadm command runs, it creates new tape device links for devices specified in the /etc/devlink.tab file using the configuration specified by the file. The -c tape option restricts the command to creating new links for tape-class devices only:

    root@mds1:~# devfsadm -c tape
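
    Optionally, you can confirm that a new link points to the intended physical device by listing it. The link target shown below is abbreviated in the same way as the earlier examples:

    root@mds1:~# ls -l /dev/rmt/60cbn
    lrwxrwxrwx   1 root     root   ... /dev/rmt/60cbn -> ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
    root@mds1:~# 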
    
  8. Create the same persistent tape-device links on each potential metadata server and datamover in the shared file system configuration. Add the same lines to the /etc/devlink.tab file, delete the links in /dev/rmt, and run devfsadm -c tape.

    In the example, we have a potential metadata server, mds2, and a datamover client, clnt1. So we edit the /etc/devlink.tab files on each to match that on the active server, mds1. Then we delete the existing links in /dev/rmt on mds2 and clnt1, and run devfsadm -c tape on each:

    root@mds1:~# ssh root@mds2
    Password:
    root@mds2:~# vi /etc/devlink.tab
    ...
    type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
    type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
    type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
    type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
    :wq
    root@mds2:~# rm /dev/rmt/* 
    root@mds2:~# devfsadm -c tape
    root@mds2:~# exit
    root@mds1:~# ssh clnt1
    Password:
    root@clnt1:~# vi /etc/devlink.tab
    ...
    type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
    type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
    type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
    type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
    :wq
    root@clnt1:~# rm /dev/rmt/*  
    root@clnt1:~# devfsadm -c tape
    root@clnt1:~# exit
    root@mds1:~# 
    
  9. Now, configure the hosts of the archiving file system so that they can use the archival storage.

Configure the Hosts of the Archiving File System to Use the Archival Storage

For the active metadata server and each potential metadata server and datamover client, proceed as follows:

  1. Log in to the host as root.

    In the examples, we log in to a datamover client named datamvr1:

    root@datamvr1:~# 
    
  2. Open the /etc/opt/SUNWsamfs/mcf file in a text editor.

    In the example, we use the vi editor.

    root@datamvr1:~# vi /etc/opt/SUNWsamfs/mcf 
    # Equipment           Equipment  Equipment  Family     Device   Additional
    # Identifier          Ordinal    Type       Set        State    Parameters
    #------------------   ---------  ---------  ---------  ------   -------------
    shrfs                 100         ms        shrfs      on
    /dev/dsk/c1t3d0s3     101         md        shrfs      on
    /dev/dsk/c1t3d0s4     102         md        shrfs      on
    ...
    
  3. Following the file system definitions in the /etc/opt/SUNWsamfs/mcf file, start a section for the archival storage equipment.

    In the example, we add some headings for clarity:

    root@datamvr1:~# vi /etc/opt/SUNWsamfs/mcf 
    ...
    # Archival storage for copies:
    #
    # Equipment              Equipment Equipment Family    Device Additional
    # Identifier             Ordinal   Type      Set       State  Parameters
    #----------------------- --------- --------- --------- ------ ----------------
    
  4. To add archival tape storage, start by adding an entry for the library. In the equipment identifier field, enter the device ID for the library and assign an equipment ordinal number:

    In this example, the library equipment identifier is /dev/scsi/changer/c1t0d5. We set the equipment ordinal number to 900, the start of the range that follows the range chosen for our disk archive:

    # Archival storage for copies:
    #
    # Equipment              Equipment Equipment Family    Device Additional
    # Identifier             Ordinal   Type      Set       State  Parameters
    #----------------------- --------- --------- --------- ------ ----------------
    /dev/scsi/changer/c1t0d5 900
    
  5. Set the equipment type to rb, a generic SCSI-attached tape library, provide a name for the tape library family set, and set the device state on.

    In this example, we name the library family set lib1:

    # Archival storage for copies:
    #
    # Equipment              Equipment Equipment Family    Device Additional
    # Identifier             Ordinal   Type      Set       State  Parameters
    #----------------------- --------- --------- --------- ------ ----------------
    /dev/scsi/changer/c1t0d5 900       rb        lib1      on
    
  6. In the Additional Parameters column, you can enter an optional, user-defined path and name for the library catalog.

    The optional, non-default path cannot exceed 127 characters. In the example, we use the default path, /var/opt/SUNWsamfs/catalog/, with the user-defined catalog file name lib1cat. Note that, due to document layout limitations, the example abbreviates the path:

    # Archival storage for copies:
    #
    # Equipment              Equipment Equipment Family   Device Additional
    # Identifier             Ordinal   Type      Set      State  Parameters
    #----------------------- --------- --------- -------- ------ ----------------
    /dev/scsi/changer/c1t0d5 900       rb        lib1     on     .../lib1cat
    
  7. Next, add an entry for each tape drive. Use the persistent equipment identifiers that we established in the procedure "Connect Tape Drives to Server and Datamover Hosts Using Persistent Bindings".

    # Archival storage for copies:
    #
    # Equipment              Equipment Equipment Family   Device Additional
    # Identifier             Ordinal   Type      Set      State  Parameters
    #----------------------- --------- --------- -------- ------ -----------------
    /dev/scsi/changer/c1t0d5  900       rb       lib1     on     .../lib1cat
    /dev/rmt/60cbn            901       tp       lib1     on
    /dev/rmt/61cbn            902       tp       lib1     on
    /dev/rmt/62cbn            903       tp       lib1     on
    /dev/rmt/63cbn            904       tp       lib1     on
    
  8. Finally, if you wish to configure an Oracle HSM historian yourself, add an entry using the equipment type hy. Enter a hyphen in the family-set and device-state columns and enter the path to the historian's catalog in the additional-parameters column.

    The historian is a virtual library that catalogs volumes that have been exported from the archive. If you do not configure a historian, the software creates one automatically using the highest specified equipment ordinal number plus one.

    Note that the example abbreviates the path to the historian catalog for page-layout reasons. The full path is /var/opt/SUNWsamfs/catalog/historian_cat:

    # Archival storage for copies:
    #
    # Equipment              Equipment Equipment Family   Device Additional
    # Identifier             Ordinal   Type      Set      State  Parameters
    #----------------------- --------- --------- -------- ------ ----------------
    /dev/scsi/changer/c1t0d5 900       rb        lib1      on    ...catalog/lib1cat
    /dev/rmt/60cbn           901       tp        lib1      on
    /dev/rmt/61cbn           902       tp        lib1      on
    /dev/rmt/62cbn           903       tp        lib1      on
    /dev/rmt/63cbn           904       tp        lib1      on
    historian                999       hy        -         -     .../historian_cat
    
  9. Save the mcf file, and close the editor.

    ...
    /dev/rmt/63cbn           904       tp        lib1      on
    historian                999       hy        -         -     .../historian_cat
    :wq
    root@datamvr1:~# 
    
  10. Check the mcf file for errors by running the sam-fsd command. Correct any errors found.

    The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error:

    root@datamvr1:~# sam-fsd
    ...
    Would start sam-stagealld()
    Would start sam-stagerd()
    Would start sam-amld()
    root@datamvr1:~# 
    
  11. Tell the Oracle HSM service to reread the mcf file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.

    root@datamvr1:~# samd config
    Configuring SAM-FS
    root@datamvr1:~# 
    
  12. Repeat this procedure until all active and potential metadata servers and all datamover clients have been configured to use the archival storage.

  13. If required, distribute tape I/O across the hosts of the shared archiving file system.

  14. If you plan on using the sideband database feature, go to "Configuring the Reporting Database".

  15. Otherwise, go to "Configuring Notifications and Logging".

Distribute Tape I/O Across the Hosts of the Shared Archiving File System

Starting with Oracle HSM Release 6.1.4, any client of a shared archiving file system that runs on Oracle Solaris 11 or higher can attach tape drives and carry out tape I/O on behalf of the file system. Distributing tape I/O across these datamover hosts greatly reduces server overhead, improves file-system performance, and allows significantly more flexibility when scaling Oracle HSM implementations. As your archiving needs increase, you now have the option of either replacing Oracle HSM metadata servers with more powerful systems (vertical scaling) or spreading the load across more clients (horizontal scaling).

To distribute tape I/O across shared file-system hosts, proceed as follows:

  1. Connect all devices that will be used for distributed I/O to the file system metadata server and to all file system clients that will handle tape I/O.

  2. If you have not already done so, use persistent bindings to connect tape drives to each client that will serve as a datamover. Then return here.

  3. Log in to the shared archiving file system's metadata server as root.

    In the example, the server's hostname is mds1:

    root@mds1:~# 
    
  4. Make sure that the metadata server is running Oracle Solaris 11 or higher.

    root@mds1:~# uname -r
    5.11
    root@mds1:~# 
    
  5. Make sure that all clients that serve as datamovers are running Oracle Solaris 11 or higher.

    In the example, we log in to client hosts clnt1 and clnt2 remotely using ssh and get the Solaris version from the log-in banner:

    root@mds1:~# ssh root@clnt1
    Password:
    Oracle Corporation      SunOS 5.11      11.1    September 2013
    root@clnt1:~# exit
    root@mds1:~# ssh root@clnt2
    Password:
    Oracle Corporation      SunOS 5.11      11.1    September 2013
    root@clnt2:~# exit
    root@mds1:~# 
    
  6. Calculate the amount of system memory that can be allocated as buffer space for each tape-drive in the distributed I/O configuration. Divide the total available memory by the number of drives and subtract a sensible safety margin:

    (total-memory bytes)/(drive-count drives) = memory bytes/drive 
    (memory bytes/drive) - (safe-margin bytes/drive) = buffsize bytes/drive
     
    

    Oracle HSM allocates a buffer for each drive used. So make sure that you do not inadvertently configure more buffer space than system memory can provide. In the example, we find that we can allocate no more than 224 kilobytes per drive. So we round down to 128 to allow a margin of safety.

    ((3584 kilobytes)/(16 drives)) = 224 kilobytes/drive
    buffsize = 128 kilobytes/drive
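
    If you need a figure for the memory installed on a host, the prtconf command reports it. The value shown here is purely illustrative:

    root@mds1:~# prtconf | grep Memory
    Memory size: 65536 Megabytes
    root@mds1:~# 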
     
    
  7. Once you have calculated the size of the buffer that can be allocated to each drive, calculate an Oracle HSM device block size and a number of blocks that will fit in a buffer of the specified size.

    (number-of-blocks blocks/buffer)*(block-size bytes/block/drive) = buffsize bytes/drive
    

    Vary the number of blocks and the block size until the product of the two is less than or equal to the calculated buffer size. The number of blocks must be in the range [2-8192]. In the example, we settle on two blocks of 64 kilobytes each per buffer:

    (2 blocks/buffer)*(64 kilobytes/block/drive) = 128 kilobytes/drive
    
  8. On the metadata server, open the /etc/opt/SUNWsamfs/archiver.cmd file in a text editor. On a new line in the general directives section at the top of the file, enter bufsize = media-type media-blocks, where:

    • media-type is the type code that the mcf file assigns to the drives and media used for distributed I/O.

    • media-blocks is the number of blocks per buffer that you calculated above.

    Save the file, and close the editor.

    In the example, we log in to the server mds1 and use the vi editor to add the line bufsize = ti 2, where ti is the media type for the Oracle StorageTek T10000 drives that we are using and 2 is the number of blocks per drive buffer that we calculated:

    root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
    #        archiver.cmd
    #-----------------------------------------------------------------------
    # General Directives
    archivemeta = off
    examine = noscan
    bufsize = ti 2
    :wq
    root@mds1:~# 
    
  9. On the metadata server, open the /etc/opt/SUNWsamfs/defaults.conf file in a text editor. For each media type that will participate in distributed I/O, enter a line of the form media-type_blksize = size, where:

    • media-type is the type code that the mcf file assigns to the drives and media used for distributed I/O.

    • size is the block size that you calculated earlier in this procedure.

    By default, the device block size for StorageTek T10000 drives is 2 megabytes or 2048 kilobytes (ti_blksize = 2048). So, in the example, we override the default with the block size that we calculated, 64 kilobytes:

    root@mds1:~# vi /etc/opt/SUNWsamfs/defaults.conf 
    # These are the defaults.  To change the default behavior, uncomment the
    # appropriate line (remove the '#' character from the beginning of the line)
    # and change the value.
    ...
    #li_blksize = 256
    ti_blksize = 64
    root@mds1:~# 
    
  10. While still in the /etc/opt/SUNWsamfs/defaults.conf file, enable distributed I/O. On a new line, enter distio = on (or, if a commented #distio line is already present, uncomment it and set its value to on).

    By default, distio is off (disabled). In the example, we add the line distio = on:

    ...
    distio = on
    
  11. While still in the /etc/opt/SUNWsamfs/defaults.conf file, enable each device type that should participate in distributed I/O. On a new line, enter media-type_distio = on, where media-type is the type code that the mcf file assigns to drives and media.

    By default, StorageTek T10000 drives and LTO drives are allowed to participate in distributed I/O (ti_distio = on and li_distio = on), while all other types are excluded. In the example, we explicitly include StorageTek T10000 drives:

    ...
    distio = on
    ti_distio = on
    
  12. While still in the /etc/opt/SUNWsamfs/defaults.conf file, disable each device type that should not participate in distributed I/O. On a new line, enter media-type_distio = off, where media-type is the type code that the mcf file assigns to drives and media.

    In the example, we exclude LTO drives:

    ...
    distio = on
    ti_distio = on
    li_distio = off
    
  13. When you have finished editing the /etc/opt/SUNWsamfs/defaults.conf file, save the contents and close the editor.

    ...
    distio = on
    ti_distio = on
    li_distio = off
    :wq
    root@mds1:~# 
    
  14. On each client that will serve as a datamover, edit the defaults.conf file so that it matches the file on the server.
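
    One straightforward way to do this is to copy the file from the server to each datamover client, for example with scp. The host name below is the one used in this section's examples:

    root@mds1:~# scp /etc/opt/SUNWsamfs/defaults.conf root@clnt1:/etc/opt/SUNWsamfs/
    Password:
    root@mds1:~# 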

  15. On each client that will serve as a datamover, open the /etc/opt/SUNWsamfs/mcf file in a text editor, and update the file to include all of the tape devices that the metadata server is using for distributed tape I/O. Make sure that the device order and equipment numbers are identical to those in the mcf file on the metadata server.

    In the example, we use the vi editor to configure the mcf file on host clnt1:

    root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment              Equipment Equipment Family      Device Additional
    # Identifier             Ordinal   Type      Set         State  Parameters
    #----------------------- --------- --------- ----------  ------ --------------
    shrfs                    800       ms        shrfs       on
    ...
    # Archival storage for copies:
    /dev/rmt/60cbn           901       ti                    on
    /dev/rmt/61cbn           902       ti                    on
    /dev/rmt/62cbn           903       ti                    on
    /dev/rmt/63cbn           904       ti                    on
    
  16. If the tape library listed in the /etc/opt/SUNWsamfs/mcf file on the metadata server is configured on the client that will serve as a datamover, specify the library family set as the family set name for the tape devices that are being used for distributed tape I/O. Save the file.

    In the example, the library is configured on host clnt1, so we use the family set name lib1 for the tape devices:

    root@clnt1:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment              Equipment Equipment Family      Device Additional
    # Identifier             Ordinal   Type      Set         State  Parameters
    #----------------------- --------- --------- ----------  ------ --------------
    shrfs                    800       ms        shrfs       on
    ...
    # Archival storage for copies:
    /dev/scsi/changer/c1t0d5 900       rb        lib1        on     .../lib1cat
    /dev/rmt/60cbn           901       ti        lib1        on
    /dev/rmt/61cbn           902       ti        lib1        on
    /dev/rmt/62cbn           903       ti        lib1        on
    /dev/rmt/63cbn           904       ti        lib1        on
    :wq
    root@clnt1:~# 
    
  17. If the tape library listed in the /etc/opt/SUNWsamfs/mcf file on the metadata server is not configured on the client that will serve as a datamover, use a hyphen (-) as the family set name for the tape devices that are being used for distributed tape I/O. Then save the file and close the editor.

    In the example, the library is not configured on host clnt2, so we use the hyphen as the family set name for the tape devices:

    root@clnt2:~# vi /etc/opt/SUNWsamfs/mcf
    # Equipment              Equipment Equipment Family      Device Additional
    # Identifier             Ordinal   Type      Set         State  Parameters
    #----------------------- --------- --------- ----------  ------ --------------
    shrfs                    800       ms        shrfs       on
    ...
    # Archival storage for copies:
    /dev/rmt/60cbn           901       ti        -           on
    /dev/rmt/61cbn           902       ti        -           on
    /dev/rmt/62cbn           903       ti        -           on
    /dev/rmt/63cbn           904       ti        -           on
    :wq
    root@clnt2:~# 
    
  18. If you need to enable or disable distributed tape I/O for particular archive set copies, log in to the server, open the /etc/opt/SUNWsamfs/archiver.cmd file in a text editor, and add the -distio parameter to the copy directive. Set -distio on to enable distributed I/O or -distio off to disable it. Save the file.

    In the example, we log in to the server mds1 and use the vi editor to turn distributed I/O off for copy 1:

    root@mds1:~# vi /etc/opt/SUNWsamfs/archiver.cmd
    #        archiver.cmd
    ...
    params
    allsets -sort path -offline_copy stageahead
    allfiles.1 -startage 10m -startsize 500M -startcount 500000 -distio off
    allfiles.2 -startage 24h -startsize 20G  -startcount 500000 -reserve set
    :wq
    root@mds1:~# 
    
  19. Check the configuration files for errors by running the sam-fsd command. Correct any errors found.

    The sam-fsd command reads Oracle HSM configuration files and initializes file systems. It will stop if it encounters an error. In the example, we run the command on the server, mds1:

    root@mds1:~# sam-fsd
    
  20. Tell the Oracle HSM service to read the modified configuration files and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.

    root@mds1:~# samd config
    
  21. To verify that distributed I/O has been successfully activated, use the command samcmd g. If the DATAMOVER flag appears in the output for the clients, distributed I/O has been successfully activated.

    In the example, the flag is present:

    root@mds1:~# samcmd g
    Shared clients samcmd 6.0.dist_tapeio 11:09:13 Feb 20 2014
    samcmd on mds1
    shrfs is shared, server is mds1, 2 clients 3 max
    ord hostname             seqno nomsgs status   config  conf1  flags
      1 mds1          14      0   8091  808540d   4051      0 MNT SVR
     
        config   :   CDEVID      ARCHIVE_SCAN    GFSID   OLD_ARCHIVE_FMT
        "        :   SYNC_META   TRACE   SAM_ENABLED     SHARED_MO
        config1  :   NFSV4_ACL   MD_DEVICES      SMALL_DAUS      SHARED_FS
        flags    :
        status   :   MOUNTED     SERVER  SAM     DATAMOVER
        last_msg :  Wed Jul  2 10:13:50 2014
     
      2 clnt1     127      0   a0a1  808540d   4041      0 MNT CLI
     
        config   :   CDEVID      ARCHIVE_SCAN    GFSID   OLD_ARCHIVE_FMT
        "        :   SYNC_META   TRACE   SAM_ENABLED     SHARED_MO
        config1  :   NFSV4_ACL   MD_DEVICES      SHARED_FS
        flags    :
        status   :   MOUNTED     CLIENT  SAM     SRVR_BYTEREV
        "        :   DATAMOVER
    ...
    
  22. If you plan on using the sideband database feature, go to "Configuring the Reporting Database".

  23. Otherwise, go to "Configuring Notifications and Logging".

Accessing File Systems from Multiple Hosts Using NFS and SMB/CIFS

Multiple hosts can access Oracle HSM file systems using Network File System (NFS) or Server Message Block (SMB)/Common Internet File System (CIFS) shares in place of or in addition to the Oracle HSM software's native support for multiple-host file-system access (see "Accessing File Systems from Multiple Hosts Using Oracle HSM Software"). The following sections outline the basic configuration steps:

Sharing Oracle HSM File Systems Using NFS

Carry out the following tasks:

Disable Delegation Before Using NFS 4 to Share an Oracle HSM Shared File System

If you use NFS to share an Oracle HSM shared file system, you need to make sure that the Oracle HSM software controls access to files without interference from NFS. This is not generally a problem, because, when the NFS server accesses files on behalf of its clients, it does so as a client of the Oracle HSM shared file system. Problems can arise, however, if NFS version-4 servers are configured to delegate control over read and write access to their clients. Delegation is attractive because the server only needs to intervene to head off potential conflicts. The server's workload is partially distributed across the NFS clients, and network traffic is reduced. But delegation grants access—particularly write access—independently of the Oracle HSM server, which also controls access from its own shared file-system clients. To prevent conflicts and potential file corruption, you need to disable delegation. Proceed as follows.

  1. Log in to the metadata server (MDS) host of the Oracle HSM file system that you want to configure as an NFS share. Log in as root.

    In the examples below, the server name is mds1.

    root@mds1:~# 
    
  2. If you are using NFS version 4 and the NFS server runs Solaris 11.1 or later, use the sharectl set -p command of the Service Management Facility (SMF) to turn the NFS server_delegation property off.

    root@mds1:~# sharectl set -p server_delegation=off nfs
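
    To confirm the change, you can read the property back; this check is optional:

    root@mds1:~# sharectl get -p server_delegation nfs
    server_delegation=off
    root@mds1:~# 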
    
  3. If you are using NFS version 4 and the NFS server runs Solaris 11.0 or earlier, disable delegations by opening the /etc/default/nfs file in a text editor and setting the NFS_SERVER_DELEGATION parameter off. Save the file, and close the editor.

    In the example, we use the vi editor:

    root@mds1:~# vi /etc/default/nfs
    # ident "@(#)nfs        1.10    04/09/01 SMI"
    # Copyright 2004 Sun Microsystems, Inc.  All rights reserved.
    # Use is subject to license terms.
    ...
    NFS_SERVER_DELEGATION=off
    :wq
    root@mds1:~# 
    
  4. If the Oracle HSM file system that you intend to share supports the Write-Once Read-Many (WORM) feature, configure NFS servers and clients to share WORM files and directories now.

  5. Otherwise, configure the NFS server on the Oracle HSM host.

Configure NFS Servers and Clients to Share WORM Files and Directories

  1. Log in to the metadata server (MDS) host of the Oracle HSM file system that you want to share using NFS. Log in as root.

    In the examples below, the server name is mds1 and the client name is nfsclnt1.

    root@mds1:~# 
    
  2. If the Oracle HSM file system that you intend to share uses the WORM feature and is hosted on a server running under Oracle Solaris 10 or later, make sure that NFS version 4 is enabled on the NFS server and on all clients.

    In the example, we check the server mds1 and the client nfsclnt1. In each case, we first check the Solaris version level using the uname -r command. Then we pipe the output of the modinfo command to grep and a regular expression that finds the NFS version information:

    root@mds1:~# uname -r
    5.11
    root@mds1:~# modinfo | grep -i "nfs.* version 4"
    258 7a600000  86cd0  28   1  nfs (network filesystem version 4)
    root@mds1:~# ssh root@nfsclnt1
    Password: ...
    root@nfsclnt1:~# uname -r
    5.11
    root@nfsclnt1:~# modinfo | grep -i "nfs.* version 4"
    278 fffffffff8cba000  9df68  27   1  nfs (network filesystem version 4)
    root@nfsclnt1:~# exit
    root@mds1:~# 
    
  3. If NFS version 4 is not enabled on a server running under Oracle Solaris 10 or later, log in as root on the server and on each client. Then use the sharectl set command to enable NFS 4:

    root@mds1:~# sharectl set -p server_versmax=4 nfs
    root@mds1:~# ssh root@nfsclnt1
    Password: ...
    root@nfsclnt1:~# sharectl set -p server_versmax=4 nfs
    root@nfsclnt1:~# exit
    root@mds1:~# 
    
  4. Next, configure the NFS server on the Oracle HSM host.

Configure the NFS Server on the Oracle HSM Host

Before clients can successfully mount an Oracle HSM file system using Network File System (NFS), you must configure the NFS server so that it does not attempt to share the Oracle HSM file system before the file system has been successfully mounted on the host. Under Oracle Solaris 10 and subsequent versions of the operating system, the Service Management Facility (SMF) manages mounting of file systems at boot time. If you do not configure NFS using the procedure below, the NFS share can be attempted before the QFS file system has been mounted, so that either the mount or the share fails.
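
You can check the current state of the NFS server service at any point with the svcs command. The state and start time shown here are only illustrative:

    root@mds1:~# svcs nfs/server
    STATE          STIME    FMRI
    online         Nov_01   svc:/network/nfs/server:default
    root@mds1:~# 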

  1. Log in to the metadata server (MDS) host of the Oracle HSM file system that you want to configure as an NFS share. Log in as root.

    In the examples below, the server name is mds1.

    root@mds1:~# 
    
  2. Export the existing NFS configuration to an XML manifest file by redirecting the output of the svccfg export /network/nfs/server command.

    In the example, we direct the exported configuration to the manifest file /var/tmp/server.xml:

    root@mds1:~# svccfg export /network/nfs/server > /var/tmp/server.xml
    root@mds1:~# 
    
  3. Open the manifest file in a text editor, and locate the filesystem-local dependency.

    In the example, we open the file in the vi editor. The entry for the filesystem-local dependency is listed immediately before the entry for the dependent nfs-server_multi-user-server:

    root@mds1:~# vi /var/tmp/server.xml
    <?xml version='1.0'?>
    <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
    <service_bundle type='manifest' name='export'>
      <service name='network/nfs/server' type='service' version='0'>
        ...
        <dependency name='filesystem-local' grouping='require_all' restart_on='error' type='service'>
          <service_fmri value='svc:/system/filesystem/local'/>
        </dependency>
        <dependent name='nfs-server_multi-user-server' restart_on='none'
            grouping='optional_all'>
          <service_fmri value='svc:/milestone/multi-user-server'/>
        </dependent>
        ...
    
  4. Immediately after the filesystem-local dependency, add a qfs dependency that mounts the QFS shared file system. Then save the file, and exit the editor.

    This will mount the Oracle HSM shared file system before the server tries to share it via NFS:

    <?xml version='1.0'?>
    <!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
    <service_bundle type='manifest' name='export'>
      <service name='network/nfs/server' type='service' version='0'>
        ...
        <dependency name='filesystem-local' grouping='require_all' restart_on='error' type='service'>
          <service_fmri value='svc:/system/filesystem/local'/>
        </dependency>
        <dependency name='qfs' grouping='require_all' restart_on='error' type='service'>
          <service_fmri value='svc:/network/qfs/shared-mount:default'/>
        </dependency>
        <dependent name='nfs-server_multi-user-server' restart_on='none'
            grouping='optional_all'>
          <service_fmri value='svc:/milestone/multi-user-server'/>
        </dependent>
    :wq
    root@mds1:~# 
    
  5. Validate the manifest file using the svccfg validate command.

    root@mds1:~# svccfg validate /var/tmp/server.xml
    
  6. If the svccfg validate command reports errors, correct the errors and revalidate the file.

    In the example, the svccfg validate command returns XML parsing errors. We inadvertently omitted an ending tag </dependency> when saving the file. So we re-open the file in the vi editor and correct the problem:

    root@mds1:~# svccfg validate /var/tmp/server.xml
    /var/tmp/server.xml:75: parser error : Opening and ending tag mismatch: dependency line 29 and service
      </service>
                ^
    /var/tmp/server.xml:76: parser error : expected '>'
    </service_bundle>
             ^
    /var/tmp/server.xml:77: parser error : Premature end of data in tag service_bundle line 3
    ^
    svccfg: couldn't parse document
    root@mds1:~# vi /var/tmp/server.xml
    ...
    :wq
    root@mds1:~# 
    
  7. Once the svccfg validate command completes without error, disable NFS using the svcadm disable nfs/server command.

    In the example, the svccfg validate command returned no output, so the file is valid and we can disable NFS:

    root@mds1:~# svccfg validate /var/tmp/server.xml
    root@mds1:~# svcadm disable nfs/server
    
  8. Delete the existing NFS server configuration using the svccfg delete nfs/server command.

    root@mds1:~# svccfg delete nfs/server
    
  9. Import the manifest file into the Service Management Facility (SMF) using the svccfg import command.

    root@mds1:~# svccfg import /var/tmp/server.xml
    
  10. Re-enable NFS using the svcadm enable nfs/server command.

    NFS is configured to use the updated configuration.

    root@mds1:~# svcadm enable nfs/server
    
  11. Confirm that the qfs dependency has been applied. Make sure that the command svcs -d svc:/network/nfs/server:default displays the /network/qfs/shared-mount:default service:

    root@mds1:~# svcs -d svc:/network/nfs/server:default 
    STATE          STIME    FMRI
    ...
    online         Nov_01   svc:/network/qfs/shared-mount:default
    ...
    
  12. Next, share the Oracle HSM file system as an NFS share.

Share the Oracle HSM File System as an NFS Share

Share the Oracle HSM file system using the procedures described in the administration documentation for your version of the Oracle Solaris operating system. The steps below summarize the procedure for Solaris 11.1:

  1. Log in to the metadata server (MDS) host of the Oracle HSM file system that you want to share using NFS. Log in as root.

    In the examples below, the server name is mds1.

    root@mds1:~# 
    
  2. Enter the command line share -F nfs -o sharing-options sharepath where the -F switch specifies the nfs sharing protocol and sharepath is the path to the shared resource. If the optional -o parameter is used, sharing-options can include any of the following:

    • rw makes sharepath available with read and write privileges to all clients.

    • ro makes sharepath available with read-only privileges to all clients.

    • rw=clients makes sharepath available with read and write privileges to clients, a colon-delimited list of one or more clients that have access to the share.

    • ro=clients makes sharepath available with read-only privileges to clients, a colon-delimited list of one or more clients that have access to the share.

    In the example, we share the hqfs1 file system read/write with clients nfsclnt1 and nfsclnt2 and read-only with nfsclnt3:

    root@mds1:~# share -F nfs -o rw=nfsclnt1:nfsclnt2,ro=nfsclnt3 /hsm/hqfs1
    ...
    root@mds1:~# 
    

    When you enter the command, the system automatically restarts the NFS server daemon, nfsd. See the share_nfs man page for additional options and details.
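
    If you later need to stop sharing the file system, the unshare command reverses the operation; it is shown here for reference only:

    root@mds1:~# unshare /hsm/hqfs1
    root@mds1:~# 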

  3. Check the sharing parameters using the command line share -F nfs.

    In the example, the command output shows that we have correctly configured the share:

    root@mds1:~# share -F nfs
    /hsm/hqfs1   sec=sys,rw=nfsclnt1:nfsclnt2,ro=nfsclnt3
    root@mds1:~# 
    
  4. Next, mount the NFS-shared Oracle HSM file system on the NFS clients.

Mount the NFS-Shared Oracle HSM File System on the NFS Clients

Mount the NFS server's file system at a convenient mount point on client systems. For each client, proceed as follows:

  1. Log in to the client as root.

    In the example, the NFS client is named nfsclnt1:

    root@nfsclnt1:~# 
    
  2. Back up the operating system's /etc/vfstab file.

    root@nfsclnt1:~# cp /etc/vfstab /etc/vfstab.backup
    root@nfsclnt1:~# 
    
  3. Open the /etc/vfstab file in a text editor.

    In the example, we use the vi editor.

    root@nfsclnt1:~# vi /etc/vfstab
    #File           Device                         Mount
    #Device         to      Mount     System fsck  at     Mount
    #to Mount       fsck    Point     Type   Pass  Boot   Options
    #------------   ------  --------- ------ ----  -----  ----------------
    /devices        -       /devices  devfs  -     no     -
    ...
    
  4. In the first column of the /etc/vfstab file, name the file device that you want to mount by specifying the name of the NFS server and the mount point of the file system that you want to share, separated by a colon.

    In the example, the NFS server is named mds1, the shared file system is named hqfs1, and the mount point on the server is /hsm/hqfs1:

    #File           Device                         Mount
    #Device         to      Mount     System fsck  at     Mount
    #to Mount       fsck    Point     Type   Pass  Boot   Options
    #------------   ------  --------- ------ ----  -----  ----------------
    /devices        -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1
    
  5. In the second column of the /etc/vfstab file, enter a hyphen (-) so that the local system does not try to check the remote file system for consistency:

    #File           Device                         Mount
    #Device         to      Mount     System fsck  at     Mount
    #to Mount       fsck    Point     Type   Pass  Boot   Options
    #------------   ------  --------- ------ ----  -----  ----------------
    /devices        -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1 -
    
  6. In the third column of the /etc/vfstab file, enter the local mount point where you will mount the remote file system.

    In the example, the mount point will be the directory /mds1:

    #File           Device                         Mount
    #Device         to      Mount     System fsck  at     Mount
    #to Mount       fsck    Point     Type   Pass  Boot   Options
    #------------   ------  --------- ------ ----  -----  ----------------
    /devices        -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1 -       /mds1
    
  7. In the fourth column of the /etc/vfstab file, enter the file-system type nfs.

    #File           Device                         Mount
    #Device         to      Mount     System fsck  at     Mount
    #to Mount       fsck    Point     Type   Pass  Boot   Options
    #------------   ------  --------- ------ ----  -----  ----------------
    /devices        -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1 -       /mds1     nfs
    

    We use the nfs file-system type, because the client mounts the remote QFS file system as an NFS file system.

  8. In the fifth column of the /etc/vfstab file, enter a hyphen (-), because the local system is not checking the remote file system for consistency.

    #File           Device                         Mount
    #Device         to      Mount     System fsck  at     Mount
    #to Mount       fsck    Point     Type   Pass  Boot   Options
    #------------   ------  --------- ------ ----  -----  ----------------
    /devices        -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1 -       /mds1     nfs    -
    
  9. In the sixth column of the /etc/vfstab file, enter yes to mount the remote file system at boot or no to mount it manually, on demand.

    In the example, we enter yes:

    #File           Device                         Mount
    #Device         to      Mount     System fsck  at     Mount
    #to Mount       fsck    Point     Type   Pass  Boot   Options
    #------------   ------  --------- ------ ----  -----  ----------------
    /devices        -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1 -       /mds1     nfs    -     yes
    
  10. In the last column of the /etc/vfstab file, enter the hard and intr NFS mount options to force unlimited retries that the user can interrupt, or set a specified number of retries by entering the soft, retrans, and timeo mount options with retrans set to 120 or more and timeo set to 3000 tenths of a second.

    Setting the hard retry option or specifying the soft option with a sufficiently long timeout and a sufficient number of retries keeps NFS requests from failing when the requested files reside on removable volumes that cannot be mounted immediately. See the Solaris mount_nfs man page for more information on these mount options.

    In the example, we enter the soft mount option:

    #File         Device                         Mount
    #Device       to      Mount     System fsck  at     Mount
    #to Mount     fsck    Point     Type   Pass  Boot   Options
    #------------ ------  --------- ------ ----  -----  ----------------
    /devices      -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1 -     /mds1     nfs    -     yes    soft,retrans=120,timeo=3000
    
  11. If you are using NFS 2, set the rsize mount parameter to 32768.

    Accept the default value for other versions of NFS.

    The rsize mount parameter sets the read buffer size to 32768 bytes (vs. the default, 8192 bytes). The example shows what an NFS 2 configuration would be like:

    #File         Device                         Mount
    #Device       to      Mount     System fsck  at     Mount
    #to Mount     fsck    Point     Type   Pass  Boot   Options
    #------------ ------  --------- ------ ----  -----  ----------------
    /devices      -       /devices  devfs  -     no     -
    ...
    mds12:/hqfs2  -       /mds12    nfs    -     yes    ...,rsize=32768 
    
  12. If you are using NFS 2, set the wsize mount parameter to 32768.

    Accept the default value for other versions of NFS.

    The wsize mount parameter sets the write buffer size to the specified number of bytes (by default, 8192 bytes). The example shows what an NFS 2 configuration would be like:

    #File         Device                         Mount
    #Device       to      Mount     System fsck  at     Mount
    #to Mount     fsck    Point     Type   Pass  Boot   Options
    #------------ ------  --------- ------ ----  -----  ----------------
    /devices      -       /devices  devfs  -     no     -
    ...
    mds12:/hqfs2  -       /mds12  nfs    -     yes    ...,wsize=32768 
    
  13. Save the /etc/vfstab file, and exit the editor.

    #File         Device                         Mount
    #Device       to      Mount     System fsck  at     Mount
    #to Mount     fsck    Point     Type   Pass  Boot   Options
    #------------ ------  --------- ------ ----  -----  ----------------
    /devices      -       /devices  devfs  -     no     -
    ...
    mds1:/hsm/hqfs1 -     /mds1     nfs    -     yes    soft,retrans=120,timeo=3000
    :wq
    root@nfsclnt1:~# 
    
  14. Create a mount point directory for the shared file system.

    In the example, we will mount the shared file system on a directory named /mds1:

    root@nfsclnt1:~# mkdir /mds1
    root@nfsclnt1:~# 
    
  15. Set the access permissions for the mount point that you specified in the /etc/vfstab file.

    Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we set permissions on the /mds1 mount-point directory to 755 (-rwxr-xr-x):

    root@nfsclnt1:~# chmod 755 /mds1
    root@nfsclnt1:~# 
    
  16. Mount the shared file system:

    root@nfsclnt1:~# mount /mds1
    root@nfsclnt1:~# 
    
  17. If you plan on using the sideband database feature, go to "Configuring the Reporting Database".

  18. Otherwise, go to "Configuring Notifications and Logging".

Sharing Oracle HSM File Systems Using SMB/CIFS

SMB makes Oracle HSM accessible to Microsoft Windows hosts and provides interoperability features, such as case-insensitivity, support for DOS attributes, and support for NFSv4 Access Control Lists (ACLs). The Oracle Solaris OS provides a Server Message Block (SMB) protocol server and client implementation that includes support for numerous SMB dialects including NT LM 0.12 and Common Internet File System (CIFS).

Oracle HSM supports Windows Security Identifiers (SIDs). Windows identities no longer need to be explicitly defined using the idmap service or provided by the Active Directory service.

To configure SMB service with Oracle HSM file systems, carry out the following tasks:

Review Oracle Solaris SMB Configuration and Administration Documentation

The sections below outline the parts of the SMB configuration process as they apply to Oracle HSM file systems. They are not comprehensive and do not cover all possible scenarios. So review the full instructions for configuring Oracle Solaris SMB servers, integrating the servers into an existing Windows environment, and mounting SMB shares on Solaris systems. Full instructions can be found in the volume Managing SMB and Windows Interoperability in Oracle Solaris in the Oracle Solaris Information Library.

Explicitly Map Windows Identities for the SMB Server (Optional)

While Oracle HSM now fully supports Windows Security Identifiers (SIDs), explicitly defining the relationships between UNIX identities and SIDs continues to have advantages in some situations. For example, in heterogeneous environments where users have both UNIX and Windows identities, you may wish to create explicit mappings using the idmap service or the Active Directory service. For full SMB and Windows interoperability information, see the product documentation for your version of Oracle Solaris.
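
For example, on the Solaris host you could add a name-based mapping rule with the idmap command and then list the configured rules. The Windows and UNIX user names below are purely illustrative:

    root@mds1:~# idmap add 'winuser:jane.doe@this.example.com' 'unixuser:jdoe'
    root@mds1:~# idmap list
    add     winuser:jane.doe@this.example.com       unixuser:jdoe
    root@mds1:~# 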

Configure Oracle HSM File Systems for Sharing With SMB/CIFS

Oracle HSM file systems that are shared using SMB/CIFS must use the new Access Control List (ACL) implementation adopted by Network File System (NFS) version 4 and introduced in Oracle Solaris 11. Older versions of Solaris and NFS used ACLs that were based on a POSIX-draft specification that is not compatible with the Windows ACL implementation.

New file systems that you create with Oracle HSM use NFS version 4 ACLs by default on Solaris 11. But, if you need to share existing Oracle HSM file systems with SMB/CIFS clients, you must convert the existing POSIX-style ACLs using the appropriate procedure:

Convert an Oracle HSM Unshared File System that Uses POSIX-Style ACLs

Proceed as follows:

  1. Log in to the host as root.

    In the example, we log in to the host mds1:

    root@mds1:~# 
    
  2. Make sure that the host runs Oracle Solaris 11.1 or higher. Use the command uname -r.

    root@mds1:~# uname -r
    5.11
    root@mds1:~# 
    
  3. Unmount the file system using the command umount mount-point, where mount-point is the mount point of the Oracle HSM file system.

    See the umount_samfs man page for further details. In the examples below, the server name is mds1 and the file system is shrfs1:

    root@mds1:~# umount /hsm/shrfs1
    
  4. Convert the file system using the samfsck -F -A file-system command, where the -F option specifies a check and repair of the file system, the -A option specifies conversion of the ACLs, and file-system is the name of the file system that you need to convert.

    The -F option is required when the -A option is specified. If the samfsck -F -A command returns errors, the process aborts and no ACLs are converted (for full descriptions of these options, see the samfsck man page).

    root@mds1:~# samfsck -F -A /hsm/shrfs1
    
  5. If errors are returned and no ACLs are converted, use the samfsck -F -a file-system command to forcibly convert the ACLs.

    The -a option specifies a forced conversion. The -F option is required when the -a option is specified (for full descriptions of these options, see the samfsck man page).

    root@mds1:~# samfsck -F -a /hsm/shrfs1
    
  6. Now, configure the SMB server for Windows Active Directory Domains or Workgroups.

Convert an Oracle HSM Shared File System that Uses POSIX-Style ACLs
  1. Log in to the file-system metadata server as root.

    In the example, we log in to the metadata server mds1:

    root@mds1:~# 
    
  2. Make sure that the metadata server runs Oracle Solaris 11.1 or higher. Use the command uname -r.

    root@mds1:~# uname -r
    5.11
    root@mds1:~# 
    
  3. Log in to each Oracle HSM client as root, and make sure that each client runs Oracle Solaris 11.1 or higher.

    In the example, we open terminal windows and remotely log in to client hosts clnt1 and clnt2 using ssh to get the Solaris version from the log-in banner:

    root@mds1:~# ssh root@clnt1
    Password:
    Oracle Corporation      SunOS 5.11      11.3    October 2015
    root@clnt1:~# 
    
    root@mds1:~# ssh root@clnt2
    Password:
    Oracle Corporation      SunOS 5.11      11.3    October 2015
    root@clnt2:~# 
    
  4. Unmount the Oracle HSM shared file system from each Oracle HSM client using the command umount mount-point, where mount-point is the mount point of the Oracle HSM file system.

    See the umount_samfs man page for further details. In the example, we unmount shrfs1 from our two clients, clnt1 and clnt2:

    Oracle Corporation      SunOS 5.11      11.3    October 2015
    root@clnt1:~# umount /hsm/shrfs1
    root@clnt1:~# 
    
    Oracle Corporation      SunOS 5.11      11.3    October 2015
    root@clnt2:~# umount /hsm/shrfs1
    root@clnt2:~# 
    
  5. Unmount the Oracle HSM shared file system from the metadata server using the command umount -o await_clients=interval mount-point, where mount-point is the mount point of the Oracle HSM file system and interval is the number of seconds by which the -o await_clients option delays execution.

    When the umount command is issued on the metadata server of an Oracle HSM shared file system, the -o await_clients option makes umount wait the specified number of seconds so that clients have time to unmount the share. It has no effect if you unmount an unshared file system or issue the command on an Oracle HSM client. See the umount_samfs man page for further details.

    In the example, we unmount the shrfs1 file system from the metadata server mds1 while allowing 60 seconds for clients to unmount:

    root@mds1:~# umount -o await_clients=60 /hsm/shrfs1
    
  6. Convert the file system from the POSIX-style ACLs to NFS version 4 ACLs. On the metadata server, use the command samfsck -F -A file-system, where the -F option specifies a check and repair of the file system, the -A option specifies conversion of the ACLs, and file-system is the name of the file system that you need to convert.

    The -F option is required when the -A option is specified. If the samfsck -F -A file-system command returns errors, the process aborts and no ACLs are converted (for full descriptions of these options, see the samfsck man page). In the example, we convert an Oracle HSM file system named shrfs1:

    root@mds1:~# samfsck -F -A /hsm/shrfs1
    
  7. If errors are returned and no ACLs are converted, forcibly convert the ACLs. On the metadata server, use the samfsck -F -a file-system command.

    The -a option specifies a forced conversion. The -F option is required when the -a option is specified (for full descriptions of these options, see the samfsck man page). In the example, we forcibly convert the Oracle HSM file system named shrfs1:

    root@mds1:~# samfsck -F -a /hsm/shrfs1
    
  8. Now, configure the SMB server for Windows Active Directory Domains or Workgroups.

Configure the SMB Server for Windows Active Directory Domains or Workgroups

Oracle Solaris SMB services can operate in either of two mutually exclusive modes: domain or workgroup. Choose one or the other based on your environment and authentication needs:

Configure the SMB Server in Domain Mode
  1. Contact the Windows Active Directory administrator and obtain the following information:

    • the name of the authenticated Active Directory user account that you need to use when joining the Active Directory domain

    • the organizational unit that you need to use in place of the default Computers container for the account (if any)

    • the fully qualified LDAP/DNS domain name for the domain where the Oracle HSM file system is to be shared.

  2. Log in to the host of the Oracle HSM file system that you want to configure as an SMB/CIFS share. Log in as root.

    If the file system is an Oracle HSM shared file system, log in to the metadata server for the file system. In the examples below, the server name is mds1.

    root@mds1:~# 
    
  3. Open-source Samba and SMB servers cannot be used together on a single Oracle Solaris system, so check whether the Samba service is running. Pipe the output of the svcs service-status command into grep with the regular expression samba.

    In the example, the output of the svcs command contains a match for the regular expression, so the Samba service is running:

    root@mds1:~# svcs | grep samba
    legacy_run     Nov_03   lrc:/etc/rc3_d/S90samba
    
  4. If the Samba service (svc:/network/samba) is running, disable it along with the Windows Internet Naming Service/WINS (svc:/network/wins), if running. Use the command svcadm disable.

    root@mds1:~# svcadm disable svc:/network/samba
    root@mds1:~# svcadm disable svc:/network/wins
    
  5. Now use the svcadm enable -r smb/server command to start the SMB server and any services on which it depends.

    root@mds1:~# svcadm enable -r smb/server
    
  6. Make sure that the system clock on the Oracle HSM host is within five minutes of the system clock of the Microsoft Windows domain controller:

    • If the Windows domain controller uses Network Time Protocol (NTP) servers, configure the Oracle HSM host to use the same servers. Create an /etc/inet/ntp.conf file on the Oracle HSM host and start the ntpd daemon using the command svcadm enable ntp (see the ntpd man page and your Oracle Solaris administration documentation for full information).

    • Otherwise, synchronize the Oracle HSM host with the domain controller by running the command ntpdate domain-controller-name (see the ntpdate man page for details and the example below) or manually set the system clock on the Oracle HSM host to the time displayed by the domain controller's system clock.
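
      For example, assuming a hypothetical domain controller named dc1.this.example.com (substitute the name of your own domain controller), you might synchronize the host clock once from the command line:

      root@mds1:~# ntpdate dc1.this.example.com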

  7. Join the Windows domain using the command smbadm join -u username -o organizational-unit domain-name, where username is the name of the user account specified by the Active Directory administrator, the optional organizational-unit is the account container specified (if any), and domain-name is the specified, fully qualified, LDAP or DNS domain name.

    In the example, we join the Windows domain this.example.com using the user account admin and the organizational unit smbsharing:

    root@mds1:~# smbadm join -u admin -o smbsharing this.example.com
    
  8. Next, go to "Share the Oracle HSM File System as an SMB/CIFS Share".

Configure the SMB Server in Workgroup Mode
  1. Contact the Windows network administrator and obtain the name of the Windows workgroup that the host of the Oracle HSM file system should join.

    The default workgroup is named WORKGROUP.

  2. Log in to the host of the Oracle HSM file system. Log in as root.

    If the file system is an Oracle HSM shared file system, log in to the metadata server for the file system. In the examples below, the server name is mds1.

    root@mds1:~# 
    
  3. Open-source Samba and SMB servers cannot be used together on a single Oracle Solaris system, so check whether the Samba service is running. Pipe the output of the svcs service-status command into grep with the regular expression samba.

    In the example, the output of the svcs command contains a match for the regular expression, so the Samba service is running:

    root@mds1:~# svcs | grep samba
    legacy_run     Nov_03   lrc:/etc/rc3_d/S90samba
    
  4. If the Samba service (svc:/network/samba) is running, disable it along with the Windows Internet Naming Service/WINS (svc:/network/wins), if running. Use the command svcadm disable.

    Samba and SMB servers cannot be used together on a single Oracle Solaris system.

    root@mds1:~# svcadm disable svc:/network/samba
    root@mds1:~# svcadm disable svc:/network/wins
    
  5. Now use the command svcadm enable -r smb/server to start the SMB server and any services on which it depends.

    root@mds1:~# svcadm enable -r smb/server
    
  6. Join the workgroup. Use the command smbadm join with the -w (workgroup) switch and the name of the workgroup specified by the Windows network administrator.

    In the example, the specified workgroup is named crossplatform.

    root@mds1:~# smbadm join -w crossplatform
    
  7. Configure the Oracle HSM host for encryption of SMB passwords. Open the /etc/pam.d/other file in a text editor, add the line password required pam_smb_passwd.so.1 nowarn, and save the file.

    In the example, we use the vi editor:

    root@mds1:~# vi /etc/pam.d/other
    # Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
    #
    # PAM configuration
    #
    # Default definitions for Authentication management
    # Used when service name is not explicitly mentioned for authentication
    #
    auth definitive         pam_user_policy.so.1
    ...
    password required       pam_authtok_store.so.1
    password required pam_smb_passwd.so.1 nowarn
    :wq
    root@mds1:~# 
    

    See the pam_smb_passwd man page for further details.

  8. Once the pam_smb_passwd module has been added to the PAM configuration, use the command passwd local-username to generate an encrypted version of the password for user local-username so that the SMB server can log in to the Windows workgroup.

    The SMB server cannot authenticate users using the same encrypted versions of passwords that the Solaris operating system uses. In the example, we generate an encrypted SMB password for the user smbsamqfs:

    root@mds1:~# passwd smbsamqfs
    
  9. Next, go to "Share the Oracle HSM File System as an SMB/CIFS Share".

Share the Oracle HSM File System as an SMB/CIFS Share

Share the Oracle HSM file system using the procedures described in the administration documentation for your version of the Oracle Solaris operating system. The steps below summarize the procedure for Solaris 11.1:

  1. Log in to the host of the Oracle HSM file system that you want to configure as an SMB/CIFS share. Log in as root.

    If the file system is an Oracle HSM shared file system, log in to the metadata server for the file system. In the examples below, the server name is mds1.

    root@mds1:~# 
    
  2. Configure the share. Use the command share -F smb -o sharing-options sharepath sharename, where the -F switch specifies the smb sharing protocol, sharepath is the path to the shared resource, and sharename is the name that you want to use for the share. The value of the optional -o parameter, sharing-options, is a comma-delimited list that includes any of the following:

    • abe=[true|false]

      When the access-based enumeration (ABE) policy for a share is true, directory entries to which the requesting user has no access are omitted from directory listings returned to the client.

    • ad-container=cn=user,ou=organization,dc=domain-dns

      The Active Directory container limits the share access to domain objects specified by the Lightweight Directory Access Protocol (LDAP) relative distinguished name (RDN) attribute values: cn (user object class), ou (organizational unit object class), and dc (domain DNS object class).

      For full information on using Active Directory containers with SMB/CIFS, consult Internet Engineering Task Force Request For Comment (RFC) 2253 and your Microsoft Windows directory services documentation.

    • catia=[true|false]

      When CATIA character substitution is true, any characters in a CATIA version 4 file name that are illegal in Windows are replaced by legal equivalents. See the share_smb man page for a list of substitutions.

    • csc=[manual|auto|vdo|disabled]

      A client-side caching (csc) policy controls client-side caching of files for offline use. The manual policy lets clients cache files when requested by users, but disables automatic, file-by-file reintegration (this is the default). The auto policy lets clients automatically cache files and enables automatic file-by-file reintegration. The vdo policy lets clients automatically cache files for offline use, enables file-by-file reintegration, and lets clients work from the local cache even while offline. The disabled policy does not allow client-side caching.

    • dfsroot=[true|false]

      In a Microsoft Distributed File System (DFS), a root share (dfsroot=true) is the share that organizes a group of widely distributed shared folders into a single DFS file system that can be more easily managed. For full information, see your Microsoft Windows Server documentation.

    • guestok=[true|false]

      When the guestok policy is true, the locally defined guest account can access the share. When it is false or left undefined (the default), the guest account cannot access the share. You can map the Windows Guest user to a locally defined UNIX user name, such as guest or nobody, using the idmap command:

      # idmap add winname:Guest unixuser:guest
      

      The locally defined account can then be authenticated against a password stored in /var/smb/smbpasswd, if desired. See the idmap man page for more information.

    • rw=[*|[[-]criterion][:[-]criterion]...]

      The rw policy grants or denies access to any client that matches the supplied access list.

      Access lists contain either a single asterisk (*) meaning all or a colon-delimited list of client access criteria, where each criterion consists of an optional minus sign (-), meaning deny, followed by a host name, a network group, a full LDAP or DNS domain name, and/or the symbol @ plus all or part of an IP address or domain name. Access lists are evaluated left to right until the client satisfies one of the criteria. See the share_smb man page for further details.

    • ro=[*|[[-]criterion][:[-]criterion]...]

      The ro policy grants or denies read-only access to any client that matches the access list.

    • none=[*|[[-]criterion][:[-]criterion]...]

      The none policy denies access to any client that matches the access list. If the access list is an asterisk (*), the ro and rw policies can override the none policy.

    In the example, we share the shrfs1 file system read/write with clients smbclnt1 and smbclnt2 and read-only with smbclient3:

    root@mds1:~# share -F smb -o rw=smbclnt1:smbclnt2,ro=smbclient3 /hsm/shrfs1
    

    When you enter the command, the system automatically restarts the SMB server daemon, smbd.
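
    As a further, purely hypothetical illustration combining several of the options described above, a share that permits guest access, enables access-based enumeration, and disables client-side caching might be configured as follows (the share name shrfs1smb is an arbitrary example):

    root@mds1:~# share -F smb -o guestok=true,abe=true,csc=disabled /hsm/shrfs1 shrfs1smb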

  3. Check the sharing parameters. Use the command share -F smb -A.

    In the example, the command output shows that we have correctly configured the share:

    root@mds1:~# share -F smb /hsm/shrfs1
    sec=sys,rw=smbclnt1:smbclnt2,ro=smbclient3
    root@mds1:~# 
    
  4. If you plan to use the sideband database feature, go to "Configuring the Reporting Database".

  5. Otherwise, go to "Configuring Notifications and Logging".