StorageTek Storage Archive Manager and StorageTek QFS Software Installation and Configuration Guide, Release 5.4, E42062-02
SAM-QFS file systems can be shared among multiple hosts in any of several ways. Each approach has particular strengths in some situations and notable drawbacks in others. So the method that you choose depends on your specific requirements. Sharing methods include:
Accessing File Systems from Multiple Hosts Using SAM-QFS Software
Accessing File Systems from Multiple Hosts Using NFS and SMB/CIFS
SAM-QFS makes file systems available to multiple hosts by configuring a server and one or more clients that all mount the file system simultaneously. File data is then passed directly from the disk devices to the hosts via high-performance, local-path I/O, without the network and intermediate-server latencies associated with NFS and CIFS shares. Only one host can be active as a metadata server at any one time, but any number of clients can be configured as potential metadata servers for redundancy purposes. There is no limit to the number of file-system mount points.
SAM-QFS supports multi-host access to both high-performance (ma) and general-purpose (ms) file systems in both multi-reader/single-writer and shared configurations, with or without archiving. There are only a few limitations:
Block (b) special files are not supported.
Character (c) special files are not supported.
FIFO named pipe (p) special files are not supported.
Segmented files are not supported.
You cannot implement a SAM-QFS shared file system in a segmented-file environment.
Mandatory locks are not supported.
An EACCES error is returned if a mandatory lock is set. Advisory locks are supported, however. For more information about advisory locks, see the fcntl man page.
SAM-QFS software hosts can access file-system data in either of two ways: in a multi-reader, single-writer configuration or in a shared configuration. Each has advantages and limitations in any given application.
In a multi-reader, single-writer configuration, a single host mounts the file system with read/write access and all other hosts mount it read-only. Configuration is a simple matter of setting mount options. Since a single host makes all changes to the files, file consistency and data integrity are ensured without additional file locking or consistency checks. All hosts read metadata as well as data directly from the disk for best performance. But all hosts must have access to file-system metadata, so all hosts in an ma file system must have access to both data and metadata devices.
In a shared configuration, all hosts can read, write, and append file data, using leases that allow a single host to access files in a given way for a given period of time. The metadata server issues read, write, and append leases and manages renewals and conflicting lease requests. Shared file systems offer great flexibility, but configuration is a bit more complex and there is more file-system overhead. All hosts read file data directly from disk, but clients access metadata over the network. So clients that lack access to metadata devices can share an ma file system.
To configure a single-writer, multiple-reader file system, proceed as follows:
Log in to the host that will serve as the writer using the root account.
[swriterfs1-writer]root@solaris:~#
On the host that will serve as the writer, open the /etc/opt/SUNWsamfs/mcf file in a text editor and add a QFS file system. You can Configure a General-Purpose ms File System or Configure a High-Performance ma File System.
On an ma file system with separate metadata devices, configure the metadata server for the file system as the writer. In the example below, we edit the mcf file on the host swriterfs1-writer using the vi text editor. The example specifies an ma file system with the equipment identifier and family set name swriterfs1 and the equipment ordinal number 300:
[swriterfs1-writer]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family     Device  Additional
# Identifier          Ordinal    Type       Set        State   Parameters
#------------------   ---------  ---------  ---------  ------  ---------------
swriterfs1            300        ma         swriterfs1 on
/dev/dsk/c0t0d0s0     301        mm         swriterfs1 on
/dev/dsk/c0t3d0s0     302        mr         swriterfs1 on
/dev/dsk/c0t3d0s1     303        mr         swriterfs1 on
Save the /etc/opt/SUNWsamfs/mcf file, and quit the editor. In the example, we save the changes and exit the vi editor:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
swriterfs1 300 ma swriterfs1 on
/dev/dsk/c0t0d0s0 301 mm swriterfs1 on
/dev/dsk/c0t3d0s0 302 mr swriterfs1 on
/dev/dsk/c0t3d0s1 303 mr swriterfs1 on
:wq
[swriterfs1-writer]root@solaris:~#
Check the mcf file for errors by running the sam-fsd command, and correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
[swriterfs1-writer]root@solaris:~# sam-fsd
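Before running sam-fsd, a quick field-count pass can catch obvious typos. The shell function below is a hypothetical pre-check, not a SAM-QFS utility; it only verifies that every non-comment mcf entry has at least the four required columns and that all entries name a single family set. sam-fsd remains the authoritative check.

```shell
# Hypothetical mcf pre-check (not a SAM-QFS utility).
mcf_precheck() {
    awk '
        /^#/ || NF == 0 { next }                 # skip comments and blanks
        NF < 4 { printf "line %d: too few fields\n", NR; bad = 1; next }
        !($4 in sets) { sets[$4] = 1; nsets++ }  # field 4: Family Set column
        { entries++ }
        END {
            if (nsets > 1) printf "warning: %d family set names\n", nsets
            if (bad) exit 1
            printf "%d entries checked\n", entries
        }
    ' "$1"
}
# Usage sketch:
#   mcf_precheck /etc/opt/SUNWsamfs/mcf
```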
Tell the SAM-QFS service to re-read the mcf file and reconfigure itself accordingly. Use the command samd config.
[swriterfs1-writer]root@solaris:~# samd config
Create the file system using the sammkfs command and the family set name of the file system, as described in "Configure a High-Performance ma File System".
In the example, the command creates the single-writer/multi-reader file system swriterfs1:
[swriterfs1-writer]root@solaris:~# sammkfs swriterfs1
Building 'swriterfs1' will destroy the contents of devices:
        /dev/dsk/c0t0d0s0
        /dev/dsk/c0t3d0s0
        /dev/dsk/c0t3d0s1
Do you wish to continue? [y/N] yes
...
Back up the operating system's /etc/vfstab file.
[swriterfs1-writer]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Add the new file system to the operating system's /etc/vfstab file, as described in "Configure a High-Performance ma File System".
In the example, we open the /etc/vfstab file in the vi text editor and add a line for the swriterfs1 family set device:
[swriterfs1-writer]root@solaris:~# vi /etc/vfstab
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -----------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
swriterfs1 - /swriterfs1 samfs - no
In the Mount Options column of the /etc/vfstab file, enter the writer mount option.
Caution: Allowing more than one host to mount a multiple-reader, single-writer file system using the writer option can corrupt the file system! You must make sure that only one host is the writer at any given time!
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -----------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
swriterfs1 - /swriterfs1 samfs - no writer
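Because only one host may ever mount the file system with the writer option, it can be worth auditing all the hosts' vfstab entries before mounting. The function below is a hypothetical safeguard, not part of SAM-QFS; it assumes you have copied each host's /etc/vfstab into one directory (the paths in the usage sketch are illustrative).

```shell
# Hypothetical audit (not a SAM-QFS tool): given a family set name and
# copies of each host's /etc/vfstab, count how many hosts mount the
# file system with the writer option. The count must never exceed one.
count_writers() {
    fs="$1"; shift
    # Match the family set at line start, then "writer" as an option
    # (preceded by whitespace or a comma, so "reader" never matches).
    grep -l "^${fs}[[:space:]].*[[:space:],]writer" "$@" 2>/dev/null | wc -l
}
# Usage sketch (illustrative paths):
#   count_writers swriterfs1 /tmp/vfstab-audit/vfstab.*   # must print 1
```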
Make any other desired changes to the /etc/vfstab file. Add mount options using commas as separators.
For example, to mount the file system automatically at boot time, enter yes in the Mount at Boot field. To mount the file system in the background if the first attempt does not succeed, add the bg mount option to the Mount Options field (see the mount_samfs man page for a comprehensive list of available mount options):
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -----------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
swriterfs1 - /swriterfs1 samfs - yes writer,bg
Save the /etc/vfstab file, and quit the editor.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -----------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
swriterfs1 - /swriterfs1 samfs - yes writer,bg
:wq
[swriterfs1-writer]root@solaris:~#
Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.
The mount-point permissions must be the same on all hosts, and users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /swriterfs1 mount-point directory and set permissions to 755 (-rwxr-xr-x):
[swriterfs1-writer]root@solaris:~# mkdir /swriterfs1
[swriterfs1-writer]root@solaris:~# chmod 755 /swriterfs1
Mount the new file system:
[swriterfs1-writer]root@solaris:~# mount /swriterfs1
Once the file system has been created, Configure the Readers.
For each host that you are configuring as a reader (a host that mounts the file system read-only), proceed as follows:
Log in to the host as root.
In a terminal window, retrieve the configuration information for the multiple-reader, single-writer file system using the command samfsconfig device-path, where device-path is the location where the command should start to search for file-system disk devices (such as /dev/dsk/*).
The samfsconfig utility retrieves file-system configuration information by reading the identifying superblock that sammkfs writes on each device that is included in a SAM-QFS file system. The command returns the correct paths to each device in the configuration starting from the current host and flags devices that cannot be reached (for full information on command syntax and parameters, see the samfsconfig man page).
In the example, the samfsconfig output shows the same equipment listed in the mcf file on swriterfs1-writer, except that the paths to the devices are specified starting from the host swriterfs1-reader1:
[swriterfs1-reader1]root@solaris:~# samfsconfig /dev/dsk/*
# Family Set 'swriterfs1' Created Thu Nov 21 07:17:00 2013
# Generation 0 Eq count 4 Eq meta count 1
#
swriterfs1 300 ma swriterfs1 -
/dev/dsk/c1t0d0s0 301 mm swriterfs1 -
/dev/dsk/c1t3d0s0 302 mr swriterfs1 -
/dev/dsk/c1t3d0s1 303 mr swriterfs1 -
Copy the entries for the file system from the samfsconfig output. Then, in a second window, open the file /etc/opt/SUNWsamfs/mcf in a text editor, and paste the copied entries into the file.
Alternatively, you could redirect the output of samfsconfig to the mcf file or use the samd buildmcf command to run samfsconfig and create the client mcf file automatically.
In the example, the mcf file for the host swriterfs1-reader1 looks like this once we add the commented-out column headings:
[swriterfs1-reader1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
swriterfs1 300 ma swriterfs1 -
/dev/dsk/c1t0d0s0 301 mm swriterfs1 -
/dev/dsk/c1t3d0s0 302 mr swriterfs1 -
/dev/dsk/c1t3d0s1 303 mr swriterfs1 -
Make sure that the Device State field is set to on for all devices. Then save the mcf file.
[swriterfs1-reader1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
swriterfs1 300 ma swriterfs1 on
/dev/dsk/c1t0d0s0 301 mm swriterfs1 on
/dev/dsk/c1t3d0s0 302 mr swriterfs1 on
/dev/dsk/c1t3d0s1 303 mr swriterfs1 on
:wq
[swriterfs1-reader1]root@solaris:~#
Check the mcf file for errors by running the sam-fsd command, and correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
[swriterfs1-reader1]root@solaris:~# sam-fsd
Back up the operating system's /etc/vfstab file.
[swriterfs1-reader1]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Add the single-writer, multiple-reader file system to the host operating system's /etc/vfstab file.
In the example, we open the /etc/vfstab file in the vi text editor and add a line for the swriterfs1 family set device:
[swriterfs1-reader1]root@solaris:~# vi /etc/vfstab
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -----------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
swriterfs1 - /swriterfs1 samfs - no
In the Mount Options column of the /etc/vfstab file, enter the reader option.
Caution: Make sure that the host mounts the file system using the reader option! Inadvertently using the writer mount option on more than one host can corrupt the file system!
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -----------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
swriterfs1 - /swriterfs1 samfs - no reader
Add any other desired mount options using commas as separators, and make any other desired changes to the /etc/vfstab file. Then save the /etc/vfstab file.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -----------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
swriterfs1 - /swriterfs1 samfs - yes reader,bg
:wq
[swriterfs1-reader1]root@solaris:~#
Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.
The mount-point permissions must be the same on all hosts, and users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /swriterfs1 mount-point directory and set permissions to 755 (-rwxr-xr-x), just as we did on the writer host:
[swriterfs1-reader1]root@solaris:~# mkdir /swriterfs1
[swriterfs1-reader1]root@solaris:~# chmod 755 /swriterfs1
Mount the new file system:
[swriterfs1-reader1]root@solaris:~# mount /swriterfs1
Repeat this procedure until all reader hosts have been configured to mount the file system read-only.
Stop here. You have configured the SAM-QFS multiple-reader, single-writer file system.
SAM-QFS shared file systems give multiple SAM-QFS hosts read, write, and append access to files. All hosts mount the file system and have direct connections to the storage devices. In addition, one host, the metadata server (MDS), has exclusive control over file-system metadata and mediates between hosts seeking access to the same files. The server provides client hosts with metadata updates via an Ethernet local network and controls file access by issuing, renewing, and revoking read, write, and append leases. Both non-archiving and archiving file systems of either the high-performance ma or general-purpose ms type can be shared.
To configure a shared file system, carry out the following tasks:
To configure a metadata server to support a shared file system, carry out the tasks listed below:
On the active metadata server, you must create a hosts file that lists network address information for the servers and clients of a shared file system. The hosts file is stored alongside the mcf file in the /etc/opt/SUNWsamfs/ directory. During the initial creation of a shared file system, the sammkfs -S command configures sharing using the settings stored in this file. So create it now, using the procedure below.
Log in to the server as root.
[sharefs-mds]root@solaris:~#
Using a text editor, create the file /etc/opt/SUNWsamfs/hosts.family-set-name on the metadata server, replacing family-set-name with the family-set name of the file system that you intend to share.
In the example, we create the file hosts.sharefs using the vi text editor. We add some optional headings, starting each line with a hash sign (#), indicating a comment:
[sharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs
# /etc/opt/SUNWsamfs/hosts.sharefs
#                                        Server   On/  Additional
#Host Name           Network Interface   Ordinal  Off  Parameters
#------------------  ------------------  -------  ---  ----------
Add the hostname and IP address or domain name of the metadata server in two columns, separated by whitespace characters.
# /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117
Add a third column, separated from the network address by whitespace characters. In this column, enter the ordinal number of the server (1 for the active metadata server, 2 for the first potential metadata server, and so on).
In this example, there is only one metadata server, so we enter 1:
# /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117 1
Add a fourth column, separated from the server ordinal by whitespace characters. In this column, enter 0 (zero).
A 0 (zero), - (hyphen), or blank value in the fourth column indicates that the host is on—configured with access to the shared file system. A 1 (numeral one) indicates that the host is off—configured but without access to the file system (for information on using these values when administering shared file systems, see the samsharefs man page).
# /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117 1 0
Add a fifth column, separated from the previous column by whitespace characters. In this column, enter the keyword server to indicate the currently active metadata server:
# /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117 1 0 server
If you plan to include one or more hosts as potential metadata servers, create an entry for each. Increment the server ordinal each time. But do not include the server keyword (there can be only one active metadata server per file system).
In the example, the host sharefs-mds_alt is a potential metadata server with the server ordinal 2. Until and unless we activate it as a metadata server, it will be a client reader:
# /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117 1 0 server
sharefs-mds_alt 10.79.213.217 2 0
Add a line for each client host, each with a server ordinal value of 0.
A server ordinal of 0 identifies the host as a client. In the example, we add two clients, sharefs-client1 and sharefs-client2.
# /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117 1 0 server
sharefs-mds_alt 10.79.213.217 2 0
sharefs-client1 10.79.213.133 0 0
sharefs-client2 10.79.213.147 0 0
Save the /etc/opt/SUNWsamfs/hosts.family-set-name file, and quit the editor.
In the example, we save the changes to /etc/opt/SUNWsamfs/hosts.sharefs and exit the vi editor:
# /etc/opt/SUNWsamfs/hosts.sharefs
# Server On/ Additional
#Host Name Network Interface Ordinal Off Parameters
#------------------ ---------------------- ------- --- ----------
sharefs-mds 10.79.213.117 1 0 server
sharefs-mds_alt 10.79.213.217 2 0
sharefs-client1 10.79.213.133 0 0
sharefs-client2 10.79.213.147 0 0
:wq
[sharefs-mds]root@solaris:~#
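A malformed hosts file is easy to miss by eye. The function below is a hypothetical sanity check, not a SAM-QFS command; it confirms that the file names exactly one active server and does not reuse a non-zero server ordinal:

```shell
# Hypothetical check (not a SAM-QFS utility): a hosts.family-set-name
# file must contain exactly one "server" keyword, and each non-zero
# server ordinal (column 3) may appear only once.
hosts_check() {
    awk '
        /^#/ || NF == 0 { next }
        $5 == "server" { servers++ }
        $3 != 0 && seen[$3]++ { dup = 1 }
        END {
            if (servers != 1) { printf "found %d server keywords\n", servers + 0; exit 1 }
            if (dup) { print "duplicate server ordinal"; exit 1 }
            print "hosts file OK"
        }
    ' "$1"
}
# Usage sketch:
#   hosts_check /etc/opt/SUNWsamfs/hosts.sharefs
```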
Place a copy of the new /etc/opt/SUNWsamfs/hosts.family-set-name file on any potential metadata servers that are included in the shared file-system configuration.
Proceed as follows:
Log in to the server as root.
[sharefs-mds]root@solaris:~#
On the metadata server (MDS), open the /etc/opt/SUNWsamfs/mcf file in a text editor and add a QFS file system. You can either Configure a General-Purpose ms File System or Configure a High-Performance ma File System.
In the example below, we edit the mcf file on the host sharefs-mds using the vi text editor. The example specifies an ma file system with the equipment identifier and family set name sharefs and the equipment ordinal number 300:
[sharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs on
/dev/dsk/c0t0d0s0 301 mm sharefs on
/dev/dsk/c0t3d0s0 302 mr sharefs on
/dev/dsk/c0t3d0s1 303 mr sharefs on
In the Additional Parameters field of the row for the ma file-system equipment, enter the shared parameter:
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs on shared
/dev/dsk/c0t0d0s0 301 mm sharefs on
/dev/dsk/c0t3d0s0 302 mr sharefs on
/dev/dsk/c0t3d0s1 303 mr sharefs on
Save the /etc/opt/SUNWsamfs/mcf file, and quit the editor. In the example, we save the changes and exit the vi editor:
sharefs 300 ma sharefs on shared
/dev/dsk/c0t0d0s0 301 mm sharefs on
/dev/dsk/c0t3d0s0 302 mr sharefs on
/dev/dsk/c0t3d0s1 303 mr sharefs on
:wq
[sharefs-mds]root@solaris:~#
Check the mcf file for errors by running the sam-fsd command, and correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
[sharefs-mds]root@solaris:~# sam-fsd
Tell the SAM-QFS service to reread the mcf file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.
[sharefs-mds]root@solaris:~# samd config
Create the file system using the sammkfs -S command and the family set name of the file system, as described in "Configure a High-Performance ma File System".
The sammkfs command reads the hosts.family-set-name and mcf files and creates a shared file system with the specified properties. In the example, the command reads the sharing parameters from the hosts.sharefs file and creates the shared file system sharefs:
[sharefs-mds]root@solaris:~# sammkfs -S sharefs
Building 'sharefs' will destroy the contents of devices:
        /dev/dsk/c0t0d0s0
        /dev/dsk/c0t3d0s0
        /dev/dsk/c0t3d0s1
Do you wish to continue? [y/N] yes
...
Log in to the server as root.
[sharefs-mds]root@solaris:~#
Back up the operating system's /etc/vfstab file.
[sharefs-mds]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Add the new file system to the operating system's /etc/vfstab file, as described in "Configure a High-Performance ma File System".
In the example, we open the /etc/vfstab file in the vi text editor and add a line for the sharefs family set device:
[sharefs-mds]root@solaris:~# vi /etc/vfstab
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sharefs - /sharefs samfs - yes
In the Mount Options column, enter the shared option:
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sharefs - /sharefs samfs - yes shared
Make any other desired changes to the /etc/vfstab file.
For example, to retry mounting the file system in the background if the initial attempt does not succeed, add the bg mount option to the Mount Options field (for a full description of available mount options, see the mount_samfs man page):
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sharefs - /sharefs samfs - yes shared,bg
Save the /etc/vfstab file, and quit the editor.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sharefs - /sharefs samfs - yes shared,bg
:wq
[sharefs-mds]root@solaris:~#
Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.
The mount-point permissions must be the same on the metadata server and on all clients, and users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /sharefs mount-point directory and set permissions to 755 (-rwxr-xr-x):
[sharefs-mds]root@solaris:~# mkdir /sharefs
[sharefs-mds]root@solaris:~# chmod 755 /sharefs
Mount the new file system:
[sharefs-mds]root@solaris:~# mount /sharefs
If your hosts are configured with multiple network interfaces, consider "Use Local Hosts Files to Route Network Communications".
Otherwise, once the shared file system has been created on the metadata server, start "Configuring File System Clients for Sharing".
Clients include both hosts that are configured purely as clients and those that are configured as potential metadata servers. In most respects, configuring a client is much the same as configuring a server. Each client includes exactly the same devices as the server. Only the mount options and the exact paths to the devices change (controller numbers are assigned by each client host and may thus vary).
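Since only the device paths should differ between a server's and a client's view of the same equipment, the two configurations can be compared mechanically. The function below is a hypothetical comparison aid, not a SAM-QFS tool; the file names in the usage sketch are illustrative.

```shell
# Hypothetical comparison (not a SAM-QFS tool): print the equipment
# ordinal, type, and family set for each mcf entry, ignoring the
# device paths, which legitimately differ from host to host.
mcf_equipment() {
    awk '!/^#/ && NF >= 4 { print $2, $3, $4 }' "$1"
}
# Usage sketch, with copies of both files at hand (illustrative names):
#   mcf_equipment mcf.server > /tmp/eq.server
#   mcf_equipment mcf.client > /tmp/eq.client
#   diff /tmp/eq.server /tmp/eq.client   # no output means they match
```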
To configure one or more clients to support a shared file system, carry out the tasks listed below:
For each client, proceed as follows:
On the client, log in as root.
[sharefs-client1]root@solaris:~#
In a terminal window, retrieve the configuration information for the shared file system using the command samfsconfig device-path, where device-path is the location where the command should start to search for file-system disk devices (such as /dev/dsk/* or /dev/zvol/dsk/rpool/*).
[sharefs-client1]root@solaris:~# samfsconfig /dev/dsk/*
If the host has access to the metadata devices for the file system and is thus suitable for use as a potential metadata server, the samfsconfig output closely resembles the mcf file that you created on the file-system metadata server.
In our example, host sharefs-client1 has access to the metadata devices (equipment type mm), so the command output shows the same equipment listed in the mcf file on the server, sharefs-mds; only the host-assigned device controller numbers differ:
[sharefs-client1]root@solaris:~# samfsconfig /dev/dsk/*
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
# Generation 0 Eq count 4 Eq meta count 1
#
sharefs 300 ma sharefs -
/dev/dsk/c1t0d0s0 301 mm sharefs -
/dev/dsk/c1t3d0s0 302 mr sharefs -
/dev/dsk/c1t3d0s1 303 mr sharefs -
If the host does not have access to the metadata devices for the file system, the samfsconfig command cannot find the metadata devices and thus cannot fit the SAM-QFS devices that it discovers into the file-system configuration. The command output lists Ordinal 0—the metadata device—under Missing Slices, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.
In our example, host sharefs-client2 has access to the data devices only. So the samfsconfig output looks like this:
[sharefs-client2]root@solaris:~# samfsconfig /dev/dsk/*
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
#
# Missing slices
# Ordinal 0
# /dev/dsk/c4t3d0s0 302 mr sharefs -
# /dev/dsk/c4t3d0s1 303 mr sharefs -
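The two output shapes described above can be told apart mechanically. The function below is a hypothetical helper, not part of SAM-QFS; it inspects captured samfsconfig output and reports whether this host saw the metadata (mm) devices, that is, whether it can serve as a potential metadata server or only as a client.

```shell
# Hypothetical helper (not part of SAM-QFS): classify a host from a
# captured samfsconfig listing. "Missing slices" means the metadata
# devices were unreachable; an uncommented mm entry means they were not.
classify_host() {
    if grep -q 'Missing slices' "$1"; then
        echo "client only: metadata devices not reachable"
    elif awk '$3 == "mm" { found = 1 } END { exit !found }' "$1"; then
        echo "potential metadata server: mm devices reachable"
    else
        echo "no SAM-QFS devices found"
    fi
}
# Usage sketch (illustrative path):
#   samfsconfig /dev/dsk/* > /tmp/sfc.out && classify_host /tmp/sfc.out
```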
Copy the entries for the shared file system from the samfsconfig output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf file in a text editor, and paste the copied entries into the file.
In our first example, the host sharefs-client1 has access to the metadata devices for the file system, so the mcf file starts out looking like this:
[sharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs -
/dev/dsk/c1t0d0s0 301 mm sharefs -
/dev/dsk/c1t3d0s0 302 mr sharefs -
/dev/dsk/c1t3d0s1 303 mr sharefs -
In the second example, the host sharefs-client2 does not have access to the metadata devices for the file system, so the mcf file starts out looking like this:
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
# /dev/dsk/c4t3d0s0 302 mr sharefs -
# /dev/dsk/c4t3d0s1 303 mr sharefs -
If the host has access to the metadata devices for the file system, add the shared parameter to the Additional Parameters field of the entry for the shared file system.
In the example, the host sharefs-client1 has access to the metadata:
[sharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs - shared
/dev/dsk/c1t0d0s0 301 mm sharefs -
/dev/dsk/c1t3d0s0 302 mr sharefs -
/dev/dsk/c1t3d0s1 303 mr sharefs -
If the host does not have access to the metadata devices for the file system, add a line for the shared file system and include the shared parameter:
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs - shared
# /dev/dsk/c4t3d0s0 302 mr sharefs -
# /dev/dsk/c4t3d0s1 303 mr sharefs -
If the host does not have access to the metadata devices for the file system, add a line for the metadata device. Set the Equipment Identifier field to nodev (no device) and set the remaining fields to exactly the same values as they have on the metadata server:
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs on shared
nodev 301 mm sharefs on
# /dev/dsk/c4t3d0s0 302 mr sharefs -
# /dev/dsk/c4t3d0s1 303 mr sharefs -
If the host does not have access to the metadata devices for the file system, uncomment the entries for the data devices.
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs on shared
nodev 301 mm sharefs on
/dev/dsk/c4t3d0s0 302 mr sharefs -
/dev/dsk/c4t3d0s1 303 mr sharefs -
Make sure that the Device State field is set to on for all devices, and save the mcf file.
In our first example, the host sharefs-client1 has access to the metadata devices for the file system, so the mcf file ends up looking like this:
[sharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs on shared
/dev/dsk/c1t0d0s0 301 mm sharefs on
/dev/dsk/c1t3d0s0 302 mr sharefs on
/dev/dsk/c1t3d0s1 303 mr sharefs on
:wq
[sharefs-client1]root@solaris:~#
In the second example, the host, sharefs-client2
, does not have access to the metadata devices for the file system, so the mcf
file starts ends up like this:
[sharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family    Device  Additional
# Identifier          Ordinal    Type       Set       State   Parameters
#------------------   ---------  ---------  --------  ------  ---------------
sharefs               300        ma         sharefs   on      shared
nodev                 301        mm         sharefs   on
/dev/dsk/c4t3d0s0     302        mr         sharefs   on
/dev/dsk/c4t3d0s1     303        mr         sharefs   on
:wq
[sharefs-client2]root@solaris:~#
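Before saving a client mcf, it can be handy to confirm that every entry's Device State field is set to on, as the step above requires. The following is a minimal sketch, not part of the product: the sample file path and contents are illustrative, and in practice you would point the check at /etc/opt/SUNWsamfs/mcf.

```shell
# Hypothetical sanity check: confirm that the Device State field
# (column 5) of every mcf entry is set to "on".
MCF=/tmp/mcf.sample    # illustrative path; normally /etc/opt/SUNWsamfs/mcf
cat > "$MCF" <<'EOF'
sharefs              300   ma   sharefs   on   shared
nodev                301   mm   sharefs   on
/dev/dsk/c4t3d0s0    302   mr   sharefs   on
/dev/dsk/c4t3d0s1    303   mr   sharefs   on
EOF
# Count non-comment entries whose fifth field is not "on".
BAD=$(awk '!/^#/ && NF >= 5 && $5 != "on" {n++} END {print n+0}' "$MCF")
echo "entries not set to on: $BAD"
```

A result of zero means the file is ready for the sam-fsd check that follows.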
Check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd
command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf
file on sharefs-client1
:
[sharefs-client1]root@solaris:~# sam-fsd
At this point, if your hosts are configured with multiple network interfaces, you may want to Use Local Hosts Files to Route Network Communications.
For each client, proceed as follows:
On the Solaris client, log in as root
.
[sharefs-client1]root@solaris:~#
Back up the operating system's /etc/vfstab
file.
[sharefs-client1]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Open the /etc/vfstab
file in a text editor, and add a line for the shared file system.
In the example, we open the file in the vi
text editor and add a line for the sharefs
family set device:
[sharefs-client1]root@solaris:~# vi /etc/vfstab
#File
#Device    Device   Mount     System  fsck  Mount    Mount
#to Mount  to fsck  Point     Type    Pass  at Boot  Options
#--------  -------  --------  ------  ----  -------  -------------------------
/devices   -        /devices  devfs   -     no       -
/proc      -        /proc     proc    -     no       -
...
sharefs    -        /sharefs  samfs   -     no
Add any other desired mount options using commas as separators, and make any other desired changes to the /etc/vfstab
file. Then save the /etc/vfstab
file.
In the example, we add no mount options.
#File
#Device Device Mount System fsck Mount Mount
#to Mount to fsck Point Type Pass at Boot Options
#-------- ------- -------- ------ ---- ------- -------------------------
/devices - /devices devfs - no -
/proc - /proc proc - no -
...
sharefs - /sharefs samfs - no -
:wq
[sharefs-client1]root@solaris:~#
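A complete /etc/vfstab entry has seven fields (device to mount, device to fsck, mount point, file-system type, fsck pass, mount-at-boot, and mount options). As a quick illustrative sketch (the sample file and its path are hypothetical, not the live system file), awk can verify that the samfs line is fully populated:

```shell
# Hypothetical check: verify the samfs entry in a vfstab copy has all
# seven fields. Content mirrors the example above.
VFSTAB=/tmp/vfstab.sample    # illustrative path; normally /etc/vfstab
cat > "$VFSTAB" <<'EOF'
/devices   -   /devices   devfs   -   no   -
/proc      -   /proc      proc    -   no   -
sharefs    -   /sharefs   samfs   -   no   -
EOF
# Report the field count and mount point for the samfs line.
FIELDS=$(awk '$4 == "samfs" {print NF}' "$VFSTAB")
MNTPT=$(awk '$4 == "samfs" {print $3}' "$VFSTAB")
echo "samfs entry: $FIELDS fields, mount point $MNTPT"
```

If the field count is less than seven, a placeholder hyphen is missing somewhere in the entry.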
Create the mount point specified in the /etc/vfstab
file, and set the access permissions for the mount point.
The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x
) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /sharefs
mount-point directory and set permissions to 755
(-rwxr-xr-x
):
[sharefs-client1]root@solaris:~# mkdir /sharefs
[sharefs-client1]root@solaris:~# chmod 755 /sharefs
Mount the shared file system:
[sharefs-client1]root@solaris:~# mount /sharefs
If the shared file system includes Linux clients, Create the Shared File System on the Linux Clients.
If you are configuring a SAM-QFS shared archiving file system, go to your next task, "Configuring Archival Storage for a Shared File System".
Otherwise, stop here. You have configured the SAM-QFS shared file system.
For each client, proceed as follows:
On the Linux client, log in as root
.
[sharefs-clientL][root@linux ~]#
In a terminal window, retrieve the configuration information for the shared file system using the samfsconfig
device-path
command, where device-path
is the location where the command should start to search for file-system disk devices (such as /dev/*
).
Since Linux hosts do not have access to the metadata devices for the file system, the samfsconfig
command cannot find the metadata devices and thus cannot fit the SAM-QFS devices that it discovers into the file-system configuration. The command output lists Ordinal 0
—the metadata device—under Missing Slices
, fails to include the line that identifies the file-system family set, and comments out the listings for the data devices.
In our example, the samfsconfig
output for Linux host sharefs-clientL
looks like this:
[sharefs-clientL][root@linux ~]# samfsconfig /dev/*
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
#
# Missing slices
# Ordinal 0
# /dev/sda4   302   mr   sharefs   -
# /dev/sda5   303   mr   sharefs   -
Copy the entries for the shared file system from the samfsconfig
output. Then, in a second window, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and paste the copied entries into the file.
In the example, the mcf
file for the Linux host, sharefs-clientL
, starts out looking like this:
[sharefs-clientL][root@linux ~]# vi /etc/opt/SUNWsamfs/mcf
# Equipment           Equipment  Equipment  Family    Device  Additional
# Identifier          Ordinal    Type       Set       State   Parameters
#------------------   ---------  ---------  --------  ------  ---------------
# /dev/sda4           302        mr         sharefs   -
# /dev/sda5           303        mr         sharefs   -
In the mcf
file, insert a line for the shared file system, and include the shared
parameter.
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#------------------ --------- --------- --------- ------ ---------------
sharefs 300 ma sharefs - shared
# /dev/sda4 302 mr sharefs -
# /dev/sda5 303 mr sharefs -
In the mcf
file, insert lines for the file system's metadata devices. Since the Linux host does not have access to metadata devices, set the Equipment Identifier
field to nodev
(no device) and then set the remaining fields to exactly the same values as they have on the metadata server:
# Equipment           Equipment  Equipment  Family    Device  Additional
# Identifier          Ordinal    Type       Set       State   Parameters
#------------------   ---------  ---------  --------  ------  ---------------
sharefs               300        ma         sharefs   on      shared
nodev                 301        mm         sharefs   on
# /dev/sda4           302        mr         sharefs   -
# /dev/sda5           303        mr         sharefs   -
In the mcf
file, uncomment the entries for the data devices.
# Equipment           Equipment  Equipment  Family    Device  Additional
# Identifier          Ordinal    Type       Set       State   Parameters
#------------------   ---------  ---------  --------  ------  ---------------
sharefs               300        ma         sharefs   on      shared
nodev                 301        mm         sharefs   on
/dev/sda4             302        mr         sharefs   -
/dev/sda5             303        mr         sharefs   -
Make sure that the Device State
field is set to on
for all devices, and save the mcf
file.
# Equipment           Equipment  Equipment  Family    Device  Additional
# Identifier          Ordinal    Type       Set       State   Parameters
#------------------   ---------  ---------  --------  ------  ---------------
sharefs               300        ma         sharefs   on      shared
nodev                 301        mm         sharefs   on
/dev/sda4             302        mr         sharefs   on
/dev/sda5             303        mr         sharefs   on
:wq
[sharefs-clientL][root@linux ~]#
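The edits above (prepend the shared family-set line and the nodev metadata placeholder, uncomment the data devices, set their state to on) can also be scripted. This is only a sketch under stated assumptions: the input and output paths are hypothetical, the sample samfsconfig output mirrors the example, and you should review the result before installing it as /etc/opt/SUNWsamfs/mcf.

```shell
# Hypothetical sketch: turn samfsconfig output into a Linux-client mcf.
RAW=/tmp/samfsconfig.out     # illustrative copy of the samfsconfig output
MCF=/tmp/mcf.client          # illustrative destination
cat > "$RAW" <<'EOF'
# Family Set 'sharefs' Created Thu Feb 21 07:17:00 2013
# Missing slices
# Ordinal 0
# /dev/sda4   302   mr   sharefs   -
# /dev/sda5   303   mr   sharefs   -
EOF
{
  # Family-set line with the shared parameter, then the nodev metadata
  # placeholder, exactly as entered by hand in the steps above.
  printf 'sharefs              300   ma   sharefs   on   shared\n'
  printf 'nodev                301   mm   sharefs   on\n'
  # Uncomment the data-device entries and set their Device State to on.
  sed -n 's|^# *\(/dev/.*\)|\1|p' "$RAW" | sed 's/-$/on/'
} > "$MCF"
DEVS=$(grep -c '^/dev/' "$MCF")
echo "client mcf contains $DEVS data devices"
```

The first sed keeps only commented lines whose first token is a /dev path, so the header comments from samfsconfig are dropped automatically.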
Check the mcf
file for errors by running the sam-fsd
command, and correct any errors found.
The sam-fsd
command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we check the mcf
file on the Linux client, sharefs-clientL
:
[sharefs-clientL][root@linux ~]# sam-fsd
For each client, proceed as follows:
On the Linux client, log in as root
.
[sharefs-clientL][root@linux ~]#
Back up the operating system's /etc/fstab
file.
[sharefs-clientL][root@linux ~]# cp /etc/fstab /etc/fstab.backup
Open the /etc/fstab
file in a text editor, and start a line for the shared file system.
In the example, after backing up the /etc/fstab
file on sharefs-clientL
, we open the file in the vi
text editor and add a line for the sharefs
family set device:
[sharefs-clientL][root@linux ~]# vi /etc/fstab
#File
#Device    Mount     System  Mount                      Dump       Pass
#to Mount  Point     Type    Options                    Frequency  Number
#--------  --------  ------  -------------------------  ---------  ------
...
/proc      /proc     proc    defaults
sharefs    /sharefs  samfs
In the fourth column of the file, add the mandatory shared
mount option.
#File
#Device Mount System Mount Dump Pass
#to Mount Point Type Options Frequency Number
#-------- ------- -------- ------------------------- --------- ------
...
/proc /proc proc defaults
sharefs /sharefs samfs shared
In the fourth column of the file, add any other desired mount options using commas as separators.
Linux clients support the following additional mount options:
rw
, ro
retry
meta_timeo
rdlease
, wrlease
, aplease
minallocsz
, maxallocsz
noauto
, auto
In the example, we add the option noauto
:
#File
#Device Mount System Mount Dump Pass
#to Mount Point Type Options Frequency Number
#-------- ------- -------- ------------------------- --------- ------
...
/proc /proc proc defaults
sharefs /sharefs samfs shared,noauto
Enter zero (0
) in each of the two remaining columns in the file. Then save the /etc/fstab
file.
#File
#Device    Mount     System  Mount                      Dump       Pass
#to Mount  Point     Type    Options                    Frequency  Number
#--------  --------  ------  -------------------------  ---------  ------
...
/proc      /proc     proc    defaults
sharefs    /sharefs  samfs   shared,noauto              0          0
:wq
[sharefs-clientL][root@linux ~]#
Create the mount point specified in the /etc/fstab
file, and set the access permissions for the mount point.
The mount-point permissions must be the same as on the metadata server and on all other clients. Users must have execute (x
) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /sharefs
mount-point directory and set permissions to 755
(-rwxr-xr-x
):
[sharefs-clientL][root@linux ~]# mkdir /sharefs
[sharefs-clientL][root@linux ~]# chmod 755 /sharefs
Mount the shared file system. Use the command mount
mountpoint
, where mountpoint
is the mount point specified in the /etc/fstab
file.
As the example shows, the mount
command generates a warning. This is normal and can be ignored:
[sharefs-clientL][root@linux ~]# mount /sharefs
Warning: loading SUNWqfs will taint the kernel: SMI license
See http://www.tux.org/lkml/#export-tainted for information
about tainted modules. Module SUNWqfs loaded with warnings
If you are configuring a SAM-QFS shared archiving file system, go to your next task, "Configuring Archival Storage for a Shared File System".
Otherwise, stop here. You have configured the SAM-QFS shared file system.
Individual hosts do not require local hosts files. The file system identifies the active metadata server and the network interfaces of active and potential metadata servers for all file system hosts (see "Create a Hosts File on the Active Metadata Server"). But local hosts files can be useful when you need to selectively route network traffic between file-system hosts that have multiple network interfaces.
Each file-system host identifies the network interfaces for the other hosts by first checking the /etc/opt/SUNWsamfs/hosts.
family-set-name
file on the metadata server. Then it checks for its own, specific /etc/opt/SUNWsamfs/hosts.
family-set-name
.local
file. If there is no local hosts file, the host uses the interface addresses specified in the global hosts file in the order specified in the global file. But if there is a local hosts file, the host compares it with the global file and uses only those interfaces that are listed in both files in the order specified in the local file. By using different addresses in each file, you can thus control the interfaces used by different hosts.
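The precedence rule above amounts to an ordered intersection: a host uses the addresses that appear in both the global and local files, in the order given by the local file. The following awk sketch models that rule; the two sample files and their addresses are purely illustrative (real hosts files carry host name, interface, ordinal, and state columns, which are omitted here for clarity).

```shell
# Hypothetical model of the local-hosts-file rule: keep only interfaces
# listed in BOTH files, in local-file order. One address per line.
GLOBAL=/tmp/ifaces.global    # addresses from the global hosts file
LOCAL=/tmp/ifaces.local      # addresses from this host's .local file
cat > "$GLOBAL" <<'EOF'
172.16.0.129
10.0.0.129
EOF
cat > "$LOCAL" <<'EOF'
10.0.0.129
192.168.9.129
EOF
# Pass 1 marks every global address; pass 2 prints local addresses that
# were marked, preserving the local file's order.
EFFECTIVE=$(awk 'NR==FNR {seen[$1]=1; next} seen[$1]' "$GLOBAL" "$LOCAL")
echo "effective: $EFFECTIVE"
```

Here only 10.0.0.129 survives: 172.16.0.129 is global-only and 192.168.9.129 is local-only, so neither is used.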
To configure local hosts files, use the procedure outlined below:
On the metadata server host and each potential metadata server host, create a copy of the file system's global hosts file, /etc/opt/SUNWsamfs/hosts.
family-set-name
, as described in "Create a Hosts File on the Active Metadata Server".
For the examples in this section, the shared file system, sharefs2nic
, includes an active metadata server, sharefs2-mds
, and a potential metadata server, sharefs2-mds_alt
, each with two network interfaces. There are also two clients, sharefs2-client1
and sharefs2-client2
.
We want the active and potential metadata servers to communicate with each other via private network addresses and with the clients via hostnames that Domain Name Service (DNS) can resolve to addresses on the public, local area network (LAN). So /etc/opt/SUNWsamfs/hosts.sharefs2
, the file system's global hosts file, specifies a private network address in the Network Interface
field of the entries for the active and potential servers and a hostname for the interface address of each client:
[sharefs2-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2
# /etc/opt/SUNWsamfs/hosts.sharefs2
#                                         Server   On/  Additional
#Host Name         Network Interface      Ordinal  Off  Parameters
#---------------   --------------------   -------  ---  ----------
sharefs2-mds       172.16.0.129           1        0    server
sharefs2-mds_alt   172.16.0.130           2        0
sharefs2-client1   sharefs2-client1       0        0
sharefs2-client2   sharefs2-client2       0        0
:wq
[sharefs2-mds]root@solaris:~#
Create a local hosts file on each of the active and potential metadata servers, using the path and file name /etc/opt/SUNWsamfs/hosts.
family-set-name
.local
, where family-set-name
is the equipment identifier of the shared file system. Only include interfaces for the networks that you want the active and potential servers to use.
In our example, we want the active and potential metadata servers to communicate with each other over the private network, so the local hosts file on each server, hosts.sharefs2.local
, lists only private addresses for active and potential servers:
[sharefs2-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2.local
#                                         Server   On/  Additional
#Host Name         Network Interface      Ordinal  Off  Parameters
#---------------   --------------------   -------  ---  ----------
sharefs2-mds       172.16.0.129           1        0    server
sharefs2-mds_alt   172.16.0.130           2        0
:wq
[sharefs2-mds]root@solaris:~# ssh root@sharefs2-mds_alt
Password:
[sharefs2-mds_alt]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2.local
#                                         Server   On/  Additional
#Host Name         Network Interface      Ordinal  Off  Parameters
#---------------   --------------------   -------  ---  ----------
sharefs2-mds       172.16.0.129           1        0    server
sharefs2-mds_alt   172.16.0.130           2        0
:wq
[sharefs2-mds_alt]root@solaris:~# exit
[sharefs2-mds]root@solaris:~#
Create a local hosts file on each of the clients, using the path and file name /etc/opt/SUNWsamfs/hosts.
family-set-name
.local
, where family-set-name
is the equipment identifier of the shared file system. Only include interfaces for the networks that you want the clients to use.
In our example, we want the clients to communicate with the server only via the public network. So the file includes only the hostnames of the active and potential metadata servers:
[sharefs2-mds]root@solaris:~# ssh root@sharefs2-client1
Password:
[sharefs2-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2.local
#                                         Server   On/  Additional
#Host Name         Network Interface      Ordinal  Off  Parameters
#---------------   --------------------   -------  ---  ----------
sharefs2-mds       sharefs2-mds           1        0    server
sharefs2-mds_alt   sharefs2-mds_alt       2        0
:wq
[sharefs2-client1]root@solaris:~# exit
[sharefs2-mds]root@solaris:~# ssh root@sharefs2-client2
Password:
[sharefs2-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/hosts.sharefs2.local
# /etc/opt/SUNWsamfs/hosts.sharefs2.local
#                                         Server   On/  Additional
#Host Name         Network Interface      Ordinal  Off  Parameters
#---------------   --------------------   -------  ---  ----------
sharefs2-mds       sharefs2-mds           1        0    server
sharefs2-mds_alt   sharefs2-mds_alt       2        0
:wq
[sharefs2-client2]root@solaris:~# exit
[sharefs2-mds]root@solaris:~#
If you started this procedure while finishing the configuration of the server, go to "Mount the Shared File System on the Active Server".
If you started this procedure while configuring a client, you should now "Mount the Shared File System on the Solaris Clients".
To set up the archival storage for an archiving SAM-QFS shared file system, carry out the following tasks:
Connect Tape Drives to Server and Datamover Hosts Using Persistent Bindings
Configure the Hosts of an Archiving File System to Use the Archival Storage
Distribute Tape I/O Across the Hosts of the Shared Archiving File System (if required).
In a shared archiving file system, all potential metadata servers must have access to the library and tape drives. If you decide to Distribute Tape I/O Across the Hosts of the Shared Archiving File System, one or more clients will also need access to drives. So you must configure each of these hosts to address each of the drives in a consistent way.
The Solaris operating system attaches drives to the system device tree in the order in which it discovers the devices at startup. This order may or may not reflect the order in which devices are discovered by other file system hosts or the order in which they are physically installed in the removable media library. So you need to persistently bind the devices to each host in the same way that they are bound to the other hosts and in the same order in which they are installed in the removable media library.
The procedure below outlines the required steps (for full information on creating persistent bindings, see the devfsadm
and devlinks
man pages and the administration documentation for your version of the Solaris operating system):
Log in to the active metadata server as root
.
[sharefs-mds]
root@solaris:~#
If you do not know the current physical order of the drives in the library, create a mapping file as described in "Determine the Order in Which Drives are Installed in the Library".
In the example, the device-mappings.txt
file looks like this:
LIBRARY  SOLARIS        SOLARIS
DEVICE   LOGICAL        PHYSICAL
NUMBER   DEVICE         DEVICE
-------  -------------  ---------------------------------------------------
2        /dev/rmt/0cbn  ->  ../../devices/pci@8,.../st@w500104f00093c438,0:cbn
1        /dev/rmt/1cbn  ->  ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
3        /dev/rmt/2cbn  ->  ../../devices/pci@8,.../st@w500104f000c086e1,0:cbn
4        /dev/rmt/3cbn  ->  ../../devices/pci@8,.../st@w500104f000b6d98d,0:cbn
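Because the mapping file's first column is the library position, a numeric sort puts the rows into installation order, which is the order you will assign device nodes in later steps. A small illustrative sketch (the file path and its simplified three-column content are assumptions, mirroring the example above):

```shell
# Hypothetical helper: list the Solaris logical devices from a
# device-mappings file in library-installation order (column 1).
MAP=/tmp/device-mappings.txt    # illustrative path and simplified content
cat > "$MAP" <<'EOF'
2 /dev/rmt/0cbn ../../devices/pci@8,.../st@w500104f00093c438,0:cbn
1 /dev/rmt/1cbn ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
3 /dev/rmt/2cbn ../../devices/pci@8,.../st@w500104f000c086e1,0:cbn
4 /dev/rmt/3cbn ../../devices/pci@8,.../st@w500104f000b6d98d,0:cbn
EOF
# Numeric sort on the library number yields installation order.
FIRST=$(sort -n "$MAP" | awk 'NR==1 {print $2}')
LAST=$(sort -n "$MAP" | awk 'END {print $2}')
echo "first drive: $FIRST, last drive: $LAST"
```

Note that library drive 1 maps to /dev/rmt/1cbn on this host, which is exactly the mismatch the persistent bindings correct.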
Open the /etc/devlink.tab
file in a text editor.
In the example, we use the vi
editor:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
# This is the table used by devlinks
# Each entry should have 2 fields; but may have 3. Fields are separated
# by single tab ('\t') characters.
...
Using the device-mappings.txt
file as a guide, add a line to the /etc/devlink.tab
file that remaps a starting node in the Solaris tape device tree, rmt/
node-number
, to the first drive in the library. The line should be in the form type=ddi_byte:tape;
addr=
device_address
,0;
rmt/
node-number
\M0
, where device_address
is the physical address of the device and node-number
is the device's position in the Solaris device tree. Choose a node number that is high enough to avoid conflicts with any devices that Solaris configures automatically (Solaris starts from node 0
).
In the example, we note the device address for the first device in the library, 1
, w500104f0008120fe
, and see that the device is currently attached to the host at rmt/1
:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
LIBRARY  SOLARIS        SOLARIS
DEVICE   LOGICAL        PHYSICAL
NUMBER   DEVICE         DEVICE
-------  -------------  ---------------------------------------------------
2        /dev/rmt/0cbn  ->  ../../devices/pci@8,.../st@w500104f00093c438,0:cbn
1        /dev/rmt/1cbn  ->  ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
3        /dev/rmt/2cbn  ->  ../../devices/pci@8,.../st@w500104f000c086e1,0:cbn
4        /dev/rmt/3cbn  ->  ../../devices/pci@8,.../st@w500104f000b6d98d,0:cbn
So we create a line in /etc/devlink.tab
that remaps rmt/60
to the number 1
drive in the library, w500104f0008120fe
:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
# Copyright (c) 1993, 2011, Oracle and/or its affiliates. All rights reserved.
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
:w
Continue to add lines to the /etc/devlink.tab
file for each tape device that is assigned for SAM-QFS archiving, so that the drive order in the device tree on the metadata server matches the installation order on the library. Save the file.
In the example, we note the order and addresses of the three remaining devices—library drive 2
at w500104f00093c438
, library drive 3
at w500104f000c086e1
, and library drive 4
at w500104f000b6d98d
:
[sharefs-mds]root@solaris:~# vi /root/device-mappings.txt
...
2  /dev/rmt/0cbn  ->  ../../devices/pci@8,.../st@w500104f00093c438,0:cbn
1  /dev/rmt/1cbn  ->  ../../devices/pci@8,.../st@w500104f0008120fe,0:cbn
3  /dev/rmt/2cbn  ->  ../../devices/pci@8,.../st@w500104f000c086e1,0:cbn
4  /dev/rmt/3cbn  ->  ../../devices/pci@8,.../st@w500104f000b6d98d,0:cbn
Then we map the device addresses to the next three Solaris device nodes, maintaining the same order as in the library:
[sharefs-mds]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
:wq
[sharefs-mds]root@solaris:~#
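Rather than composing each devlink.tab line by hand, the entries can be generated from the mapping data. The sketch below is illustrative only: the input file, its simplified two-column format (library position, WWN), and the starting node of 60 are all assumptions taken from the example, and the generated lines should be reviewed before being added to /etc/devlink.tab.

```shell
# Hypothetical generator: derive devlink.tab entries from library
# position and device WWN, assigning nodes rmt/60 upward in library order.
MAP=/tmp/tape-addrs.txt    # illustrative extract of device-mappings.txt
OUT=/tmp/devlink.add       # candidate lines to append to /etc/devlink.tab
cat > "$MAP" <<'EOF'
2 w500104f00093c438
1 w500104f0008120fe
3 w500104f000c086e1
4 w500104f000b6d98d
EOF
# Sort into library order, then emit one tab-separated entry per drive.
sort -n "$MAP" | awk -v base=60 '{
    printf "type=ddi_byte:tape;addr=%s,0;\trmt/%d\\M0\n", $2, base + NR - 1
}' > "$OUT"
cat "$OUT"
```

Because the rows are sorted by library position first, drive 1 (w500104f0008120fe) lands on rmt/60, matching the manual example.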
Delete all existing links to the tape devices in /dev/rmt
.
[sharefs-mds]root@solaris:~# rm /dev/rmt/*
Create new, persistent tape-device links from the entries in the /etc/devlink.tab
file. Use the command devfsadm -c tape
.
Each time that the devfsadm
command runs, it creates new tape device links for devices specified in the /etc/devlink.tab
file using the configuration specified by the file. The -c tape
option restricts the command to creating new links for tape-class devices only:
[sharefs-mds]root@solaris:~# devfsadm -c tape
Add the same lines to the /etc/devlink.tab
file, delete the links in /dev/rmt
, and run devfsadm -c tape
on each potential metadata server and datamover in the shared file system configuration.
In the example, we have a potential metadata server, sharefs-mds_alt
, and a datamover client, sharefs-client1
. So we edit the /etc/devlink.tab
files on each to match that on the active server, sharefs-mds
. Then we delete the existing links in /dev/rmt
on sharefs-mds_alt
and sharefs-client1
, and run devfsadm -c tape
on each:
[sharefs-mds_alt]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
:wq
[sharefs-mds_alt]root@solaris:~# rm /dev/rmt/*
[sharefs-mds_alt]root@solaris:~# devfsadm -c tape
[sharefs-mds_alt]root@solaris:~# ssh sharefs-client1
Password:
[sharefs-client1]root@solaris:~# vi /etc/devlink.tab
...
type=ddi_byte:tape;addr=w500104f0008120fe,0;    rmt/60\M0
type=ddi_byte:tape;addr=w500104f00093c438,0;    rmt/61\M0
type=ddi_byte:tape;addr=w500104f000c086e1,0;    rmt/62\M0
type=ddi_byte:tape;addr=w500104f000b6d98d,0;    rmt/63\M0
:wq
[sharefs-client1]root@solaris:~# rm /dev/rmt/*
[sharefs-client1]root@solaris:~# devfsadm -c tape
[sharefs-client1]root@solaris:~#
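Repeating these edits on every peer host can be scripted. The sketch below only prints the commands it would run (a dry-run plan) rather than executing them; the host names are illustrative, and copying the whole devlink.tab assumes the peers' files carry no host-specific entries of their own. Remove the echo wrappers and review carefully before running anything for real.

```shell
# Hypothetical dry-run: print the commands needed to replicate the
# devlink.tab entries to each peer and rebuild its tape links.
PLAN=/tmp/replicate.plan
# Peer hosts needing the same bindings (illustrative names).
for HOST in sharefs-mds_alt sharefs-client1; do
    echo "scp /etc/devlink.tab root@$HOST:/etc/devlink.tab"
    echo "ssh root@$HOST 'rm /dev/rmt/* && devfsadm -c tape'"
done > "$PLAN"
cat "$PLAN"
```

The plan contains two commands per host, mirroring the edit/delete/rebuild sequence shown in the example session above.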
Now, go to "Configuring a File System Metadata Server for Sharing".
For the active metadata server and each potential metadata server and datamover client, proceed as follows:
Log in to the host as root
.
[sharefs-host]root@solaris:~#
Open the /etc/opt/SUNWsamfs/mcf
file in a text editor.
In the example, we use the vi
editor.
[sharefs-host]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
sharefs              100   ms   sharefs   on
/dev/dsk/c1t3d0s3    101   md   sharefs   on
/dev/dsk/c1t3d0s4    102   md   sharefs   on
...
Following the file system definitions in the /etc/opt/SUNWsamfs/mcf
file, start a section for the archival storage equipment.
In the example, we add some headings for clarity:
[sharefs-host]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
...
# Archival storage for copies:
#
# Equipment               Equipment  Equipment  Family    Device  Additional
# Identifier              Ordinal    Type       Set       State   Parameters
#-----------------------  ---------  ---------  --------  ------  ----------------
To add archival tape storage, start by adding an entry for the library. In the equipment identifier field, enter the device ID for the library and assign an equipment ordinal number:
In this example, the library equipment identifier is /dev/scsi/changer/c1t0d5
. We set the equipment ordinal number to 900
, the first number in the range following the range chosen for our disk archive:
# Archival storage for copies:
#
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------------- --------- --------- --------- ------ ----------------
/dev/scsi/changer/c1t0d5 900
Set the equipment type to rb
, a generic SCSI-attached tape library, provide a name for the tape library family set, and set the device state to on
.
In this example, we name the tape library family set library1
:
# Archival storage for copies:
#
# Equipment               Equipment  Equipment  Family    Device  Additional
# Identifier              Ordinal    Type       Set       State   Parameters
#-----------------------  ---------  ---------  --------  ------  ----------------
/dev/scsi/changer/c1t0d5  900        rb         library1  on
In the Additional Parameters
column, enter the path where the library catalog will be stored.
Note that, due to document layout limitations, the example abbreviates the long path to the library catalog, /var/opt/SUNWsamfs/catalog/library1cat
:
# Archival storage for copies:
#
# Equipment Equipment Equipment Family Device Additional
# Identifier Ordinal Type Set State Parameters
#----------------------- --------- --------- --------- ------ ----------------
/dev/scsi/changer/c1t0d5 900 rb library1 on .../library1cat
Next, add an entry for each tape drive using the persistent equipment identifiers that we established in the procedure "Connect Tape Drives to Server and Datamover Hosts Using Persistent Bindings".
# Archival storage for copies:
#
# Equipment               Equipment  Equipment  Family    Device  Additional
# Identifier              Ordinal    Type       Set       State   Parameters
#-----------------------  ---------  ---------  --------  ------  -----------------
DISKVOL1                  800        ms         DISKVOL1  on
/dev/dsk/c6t0d1s7         801        md         DISKVOL1  on
/dev/dsk/c4t0d2s7         802        md         DISKVOL1  on
/dev/scsi/changer/c1t0d5  900        rb         library1  on      .../library1cat
/dev/rmt/60cbn            901        tp         library1  on
/dev/rmt/61cbn            902        tp         library1  on
/dev/rmt/62cbn            903        tp         library1  on
/dev/rmt/63cbn            904        tp         library1  on
Finally, if you wish to configure a SAM-QFS historian yourself, add an entry using the equipment type hy
. Enter a hyphen in the family-set and device-state columns and enter the path to the historian's catalog in the additional-parameters column.
The historian is a virtual library that catalogs volumes that have been exported from the archive. If you do not configure a historian, the software creates one automatically using the highest specified equipment ordinal number plus one.
Note that the example abbreviates the path to the historian catalog for page-layout reasons. The full path is /var/opt/SUNWsamfs/catalog/historian_cat
:
# Archival storage for copies:
#
# Equipment               Equipment  Equipment  Family    Device  Additional
# Identifier              Ordinal    Type       Set       State   Parameters
#-----------------------  ---------  ---------  --------  ------  ----------------
/dev/scsi/changer/c1t0d5  900        rb         library1  on      ...catalog/library1cat
/dev/rmt/60cbn            901        tp         library1  on
/dev/rmt/61cbn            902        tp         library1  on
/dev/rmt/62cbn            903        tp         library1  on
/dev/rmt/63cbn            904        tp         library1  on
historian                 999        hy         -         -       .../historian_cat
Save the mcf
file, and close the editor.
...
/dev/rmt/63cbn            904        tp         library1  on
historian 999 hy - - .../historian_cat
:wq
[sharefs-host]root@solaris:~#
Check the mcf
file for errors by running the sam-fsd
command. Correct any errors found.
The sam-fsd
command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error:
[sharefs-host]root@solaris:~# sam-fsd
Tell the SAM-QFS service to reread the mcf
file and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.
[sharefs-host]root@solaris:~# samd config
Repeat this procedure until all active and potential metadata servers and all datamover clients have been configured to use the archival storage.
Starting with SAM-QFS Release 5.4, any client of a shared archiving file system that runs on Oracle Solaris 11 or higher can attach tape drives and carry out tape I/O on behalf of the file system. Distributing tape I/O across these datamover hosts greatly reduces server overhead, improves file-system performance, and allows significantly more flexibility when scaling SAM-QFS implementations. As your archiving needs increase, you now have the option of either replacing SAM-QFS metadata servers with more powerful systems (vertical scaling) or spreading the load across more clients (horizontal scaling).
To distribute tape I/O across shared file-system hosts, proceed as follows:
Connect all devices that will be used for distributed I/O to the file system metadata server and to all file system clients that will handle tape I/O.
If you have not already done so, for each client that will serve as a datamover, Connect Tape Drives to Server and Datamover Hosts Using Persistent Bindings. Then return here.
Log in to the shared archiving file system's metadata server as root
.
In the example, the server's hostname is samsharefs-mds
:
[samsharefs-mds]root@solaris:~#
Make sure that the metadata server is running Oracle Solaris 11 or higher.
[samsharefs-mds]root@solaris:~# uname -r
5.11
[samsharefs-mds]root@solaris:~#
Make sure that all clients that serve as datamovers are running Oracle Solaris 11 or higher.
In the example, we log in to client hosts samsharefs-client1
and samsharefs-client2
remotely using ssh
and get the Solaris version from the log-in banner:
[samsharefs-mds]root@solaris:~# ssh root@samsharefs-client1
Password:
Oracle Corporation  SunOS 5.11  11.1  September 2013
[samsharefs-client1]root@solaris:~# exit
[samsharefs-mds]root@solaris:~# ssh root@samsharefs-client2
Password:
Oracle Corporation  SunOS 5.11  11.1  September 2013
[samsharefs-client2]root@solaris:~# exit
[samsharefs-mds]root@solaris:~#
On the metadata server, open the /etc/opt/SUNWsamfs/defaults.conf
file in a text editor. Uncomment the line #distio = off
, if necessary, or add it if it is not present at all.
By default, distio
is off
(disabled).
In the example, we open the file in the vi
editor and add the line distio = on
:
[samsharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
distio = on
Next, select the device types that should participate in distributed I/O. To use device type dev
with distributed I/O, add the line dev
_distio = on
to the defaults.conf
file. To exclude device type dev
from distributed I/O, add the line dev
_distio = off
. Save the file.
By default, StorageTek T10000 drives and LTO drives are allowed to participate in distributed I/O (ti_distio = on
and li_distio = on
), while all other types are excluded. In the example, we exclude LTO drives:
[samsharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/defaults.conf
# These are the defaults. To change the default behavior, uncomment the
# appropriate line (remove the '#' character from the beginning of the line)
# and change the value.
...
distio = on
li_distio = off
:wq
[samsharefs-mds]root@solaris:~#
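The same defaults.conf edits can be applied non-interactively, which helps keep the file identical across hosts. This is a sketch under stated assumptions: it works on a copy at a hypothetical path, and it assumes the stock file ships with the directive commented out as #distio = off, as described above.

```shell
# Hypothetical sketch: enable distributed I/O and exclude LTO drives by
# editing a copy of defaults.conf non-interactively.
CONF=/tmp/defaults.conf    # illustrative copy, not the live config
cat > "$CONF" <<'EOF'
# These are the defaults. To change the default behavior, uncomment the
# appropriate line and change the value.
#distio = off
EOF
# Uncomment the directive and flip it to "on", then exclude LTO drives.
sed 's/^#distio = off/distio = on/' "$CONF" > "$CONF.new"
echo 'li_distio = off' >> "$CONF.new"
grep -v '^#' "$CONF.new"
```

Running the identical script on every datamover is one way to satisfy the next step's requirement that the client files match the server's.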
On each client that will serve as a datamover, edit the defaults.conf
file so that it matches the file on the server.
On each client that will serve as a datamover, open the /etc/opt/SUNWsamfs/mcf
file in a text editor, and update the file to include all of the tape devices that the metadata server is using for distributed tape I/O. Make sure that the device order and equipment numbers are identical to those in the mcf
file on the metadata server.
In the example, we use the vi
editor to configure the mcf
file on host samsharefs-client1
:
[samsharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment               Equipment  Equipment  Family      Device  Additional
# Identifier              Ordinal    Type       Set         State   Parameters
#-----------------------  ---------  ---------  ----------  ------  --------------
samsharefs                800        ms         samsharefs  on
...
# Archival storage for copies:
/dev/rmt/60cbn            901        ti                     on
/dev/rmt/61cbn            902        ti                     on
/dev/rmt/62cbn            903        ti                     on
/dev/rmt/63cbn            904        ti                     on
If the tape library listed in the /etc/opt/SUNWsamfs/mcf file on the metadata server is configured on the client that will serve as a datamover, specify the library family set as the family set name for the tape devices that are being used for distributed tape I/O. Save the file.
In the example, the library is configured on host samsharefs-client1, so we use the family set name library1 for the tape devices:
[samsharefs-client1]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment                Equipment Equipment Family     Device Additional
# Identifier               Ordinal   Type      Set        State  Parameters
#------------------------ --------- --------- ---------- ------ --------------
samsharefs                 800       ms        samsharefs on
...
# Archival storage for copies:
/dev/scsi/changer/c1t0d5   900       rb        library1   on     .../library1cat
/dev/rmt/60cbn             901       ti        library1   on
/dev/rmt/61cbn             902       ti        library1   on
/dev/rmt/62cbn             903       ti        library1   on
/dev/rmt/63cbn             904       ti        library1   on
:wq
[samsharefs-client1]root@solaris:~#
If the tape library listed in the /etc/opt/SUNWsamfs/mcf file on the metadata server is not configured on the client that will serve as a datamover, use a hyphen (-) as the family set name for the tape devices that are being used for distributed tape I/O. Then save the file and close the editor.
In the example, the library is not configured on host samsharefs-client2, so we use the hyphen as the family set name for the tape devices:
[samsharefs-client2]root@solaris:~# vi /etc/opt/SUNWsamfs/mcf
# Equipment              Equipment Equipment Family     Device Additional
# Identifier             Ordinal   Type      Set        State  Parameters
#----------------------- --------- --------- ---------- ------ --------------
samsharefs               800       ms        samsharefs on
...
# Archival storage for copies:
/dev/rmt/60cbn           901       ti        -          on
/dev/rmt/61cbn           902       ti        -          on
/dev/rmt/62cbn           903       ti        -          on
/dev/rmt/63cbn           904       ti        -          on
:wq
[samsharefs-client2]root@solaris:~#
If you need to enable or disable distributed tape I/O for particular archive set copies, log in to the server, open the /etc/opt/SUNWsamfs/archiver.cmd file in a text editor, and add the -distio parameter to the copy directive. Set -distio on to enable distributed I/O or -distio off to disable it. Save the file.
In the example, we log in to the server samsharefs-mds and use the vi editor to turn distributed I/O off for copy 1:
[samsharefs-mds]root@solaris:~# vi /etc/opt/SUNWsamfs/archiver.cmd
# archiver.cmd
...
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -startcount 500000 -distio off
allsets.2 -startage 24h -startsize 20G -startcount 500000 -reserve set
:wq
[samsharefs-mds]root@solaris:~#
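Because per-copy -distio settings override the defaults.conf behavior, it can be useful to list which copy directives carry an explicit override. A small sketch of our own, run against sample content in /tmp rather than the real /etc/opt/SUNWsamfs/archiver.cmd:

```shell
# Sketch: report archive-set copy directives that explicitly set -distio,
# printing the directive name and the on/off value that follows the flag.
cat > /tmp/archiver.cmd <<'EOF'
params
allsets -sort path -offline_copy stageahead
allsets.1 -startage 10m -startsize 500M -startcount 500000 -distio off
allsets.2 -startage 24h -startsize 20G -startcount 500000 -reserve set
endparams
EOF
overrides=$(awk '{ for (i = 1; i <= NF; i++)
                       if ($i == "-distio") print $1, $(i + 1) }' /tmp/archiver.cmd)
echo "$overrides"
```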
Check the configuration files for errors by running the sam-fsd command. Correct any errors found.
The sam-fsd command reads SAM-QFS configuration files and initializes file systems. It will stop if it encounters an error. In the example, we run the command on the server, samsharefs-mds:
[samsharefs-mds]root@solaris:~# sam-fsd
Tell the SAM-QFS service to read the modified configuration files and reconfigure itself accordingly. Correct any errors reported and repeat as necessary.
[samsharefs-mds]root@solaris:~# samd config
To verify that distributed I/O has been successfully activated, use the command samcmd g. If the DATAMOVER flag appears in the output for the clients, distributed I/O is active.
In the example, the flag is present:
[samsharefs-mds]root@solaris:~# samcmd g
Shared clients samcmd 5.4.dist_tapeio 11:09:13 Jul 2 2014
samcmd on samsharefs-mds
samsharefs is shared, server is samsharefs-mds, 2 clients 3 max
ord hostname             seqno nomsgs status  config  conf1 flags
  1 samsharefs-mds          14      0   8091 808540d   4051     0 MNT SVR
    config   : CDEVID ARCHIVE_SCAN GFSID OLD_ARCHIVE_FMT
         "   : SYNC_META TRACE SAM_ENABLED SHARED_MO
    config1  : NFSV4_ACL MD_DEVICES SMALL_DAUS SHARED_FS
    flags    :
    status   : MOUNTED SERVER SAM DATAMOVER
    last_msg : Wed Jul 2 10:13:50 2014
  2 samsharefs-client1     127      0   a0a1 808540d   4041     0 MNT CLI
    config   : CDEVID ARCHIVE_SCAN GFSID OLD_ARCHIVE_FMT
         "   : SYNC_META TRACE SAM_ENABLED SHARED_MO
    config1  : NFSV4_ACL MD_DEVICES SHARED_FS
    flags    :
    status   : MOUNTED CLIENT SAM SRVR_BYTEREV
         "   : DATAMOVER
...
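On a live system you could confirm the flag with something like samcmd g | grep DATAMOVER. The sketch below runs the same check against saved sample output, since samcmd is only available on a SAM-QFS host; the sample lines mimic the status fields shown above:

```shell
# Sketch: count DATAMOVER flags in captured "samcmd g" output.
cat > /tmp/samcmd_g.out <<'EOF'
status   : MOUNTED SERVER SAM DATAMOVER
status   : MOUNTED CLIENT SAM SRVR_BYTEREV
     "   : DATAMOVER
EOF
datamovers=$(grep -c DATAMOVER /tmp/samcmd_g.out)
echo "hosts with DATAMOVER flag: $datamovers"
```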
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".
Multiple hosts can access SAM-QFS file systems using Network File System (NFS) or Server Message Block (SMB)/Common Internet File System (CIFS) shares in place of or in addition to the SAM-QFS software's native support for multiple-host file-system access (see "Accessing File Systems from Multiple Hosts Using SAM-QFS Software"). The following sections outline the basic configuration steps:
Carry out the following tasks:
Disable Delegation Before Using NFS 4 to Share a SAM-QFS Shared File System
Configure NFS Servers and Clients to Share SAM-QFS WORM Files and Directories, if necessary.
If you use NFS to share a SAM-QFS shared file system, you need to make sure that the SAM-QFS software controls access to files without interference from NFS. This is not generally a problem, because, when the NFS server accesses files on behalf of its clients, it does so as a client of the SAM-QFS shared file system. Problems can arise, however, if NFS version-4 servers are configured to delegate control over read and write access to their clients. Delegation is attractive because the server only needs to intervene to head off potential conflicts. The server's workload is partially distributed across the NFS clients, and network traffic is reduced. But delegation grants access—particularly write access—independently of the SAM-QFS server, which also controls access from its own shared file-system clients. To prevent conflicts and potential file corruption, you need to disable delegation. Proceed as follows.
Log in to the host of the SAM-QFS file system that you want to share using NFS. Log in as root.
If the file system is a SAM-QFS shared file system, log in to the metadata server for the file system. In the examples below, the server name is qfsnfs.
[qfsnfs]root@solaris:~#
If you are using NFS version 4 and the NFS server runs Solaris 11.1 or later, use the sharectl set -p command of the Service Management Facility (SMF) to turn the NFS server_delegation property off.
[qfsnfs]root@solaris:~# sharectl set -p server_delegation=off nfs
If you are using NFS version 4 and the NFS server runs Solaris 11.0 or earlier, disable delegations by opening the /etc/default/nfs file in a text editor and setting the NFS_SERVER_DELEGATION parameter off. Save the file.
In the example, we use the vi editor:
[qfsnfs]root@solaris:~# vi /etc/default/nfs
# ident "@(#)nfs 1.10 04/09/01 SMI"
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
...
NFS_SERVER_DELEGATION=off
:wq
[qfsnfs]root@solaris:~#
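The two procedures above differ only in mechanism, so a setup script might branch on the Solaris release. This is our own sketch with a hard-coded release string; on a real host you would substitute the value reported by uname -v:

```shell
# Sketch: pick the delegation-disabling method by Solaris release.
release="11.1"    # illustration only; use `uname -v` on a live system
case "$release" in
    11.[1-9]*) method="sharectl set -p server_delegation=off nfs" ;;
    *)         method="set NFS_SERVER_DELEGATION=off in /etc/default/nfs" ;;
esac
echo "$method"
```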
If the SAM-QFS file system that you intend to share supports the Write-Once Read-Many (WORM) feature, you should Configure NFS Servers and Clients to Share SAM-QFS WORM Files and Directories now.
Otherwise, Configure the NFS Server on the SAM-QFS Host.
Log in to the host of the SAM-QFS file system that you want to share using NFS. Log in as root.
If the file system is a SAM-QFS shared file system, log in to the metadata server for the file system. In the examples below, the server name is qfsnfs and the client name is nfsclient1.
[qfsnfs]root@solaris:~#
If the SAM-QFS file system that you intend to share uses the WORM feature and is hosted on a server running under Oracle Solaris 10 or later, make sure that NFS version 4 is enabled on the NFS server and on all clients.
In the example, we check the server qfsnfs and the client nfsclient1. In each case, we first check the Solaris version level using the uname -r command. Then we pipe the output of the modinfo command to grep and a regular expression that finds the NFS version information:
[qfsnfs]root@solaris:~# uname -r
5.11
[qfsnfs]root@solaris:~# modinfo | grep -i "nfs.* version 4"
258 7a600000 86cd0 28 1 nfs (network filesystem version 4)
[qfsnfs]root@solaris:~# ssh root@nfsclient1
Password: ...
[nfsclient1]root@solaris:~# uname -r
5.11
[nfsclient1]root@solaris:~# modinfo | grep -i "nfs.* version 4"
278 fffffffff8cba000 9df68 27 1 nfs (network filesystem version 4)
[nfsclient1]root@solaris:~# exit
[qfsnfs]root@solaris:~#
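The same grep test can be scripted. A sketch of our own, using a captured modinfo line in place of live output:

```shell
# Sketch: detect the NFS version 4 kernel module in modinfo-style output.
modinfo_line="258 7a600000 86cd0 28 1 nfs (network filesystem version 4)"
if echo "$modinfo_line" | grep -iq "nfs.* version 4"; then
    nfs4_state="present"
else
    nfs4_state="absent"
fi
echo "NFS v4 module: $nfs4_state"
```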
If NFS version 4 is not enabled on a server running under Oracle Solaris 10 or later, log in as root on the server and on each client. Then use the sharectl set command to enable NFS 4:
[qfsnfs]root@solaris:~# sharectl set -p server_versmax=4 nfs
[qfsnfs]root@solaris:~# ssh root@nfsclient1
Password: ...
[nfsclient1]root@solaris:~# sharectl set -p server_versmax=4 nfs
[nfsclient1]root@solaris:~# exit
[qfsnfs]root@solaris:~#
Before clients can successfully mount a SAM-QFS file system using Network File System (NFS), you must configure the NFS server so that it does not attempt to share the SAM-QFS file system before the file system has been successfully mounted on the host. Under Oracle Solaris 10 and subsequent versions of the operating system, the Service Management Facility (SMF) manages mounting of file systems at boot time. If you do not configure NFS using the procedure below, either the QFS mount or the NFS share will succeed and the other will fail.
Log in to the host of the SAM-QFS file system that you want to share using NFS. Log in as root.
If the file system is a SAM-QFS shared file system, log in to the metadata server for the file system. In the examples below, the server name is qfsnfs.
[qfsnfs]root@solaris:~#
Export the existing NFS configuration to an XML manifest file by redirecting the output of the svccfg export /network/nfs/server command.
In the example, we direct the exported configuration to the manifest file /var/tmp/server.xml:
[qfsnfs]root@solaris:~# svccfg export /network/nfs/server > /var/tmp/server.xml
[qfsnfs]root@solaris:~#
Open the manifest file in a text editor, and locate the filesystem-local dependency.
In the example, we open the file in the vi editor. The entry for the filesystem-local dependency is listed immediately before the entry for the dependent nfs-server_multi-user-server:
[qfsnfs]root@solaris:~# vi /var/tmp/server.xml
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/nfs/server' type='service' version='0'>
...
<dependency name='filesystem-local' grouping='require_all' restart_on='error' type='service'>
  <service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependent name='nfs-server_multi-user-server' restart_on='none' grouping='optional_all'>
  <service_fmri value='svc:/milestone/multi-user-server'/>
</dependent>
...
Immediately after the filesystem-local dependency, add a qfs dependency that mounts the QFS shared file system. Then save the file, and exit the editor.
This will mount the SAM-QFS shared file system before the server tries to share it via NFS:
<?xml version='1.0'?>
<!DOCTYPE service_bundle SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<service_bundle type='manifest' name='export'>
<service name='network/nfs/server' type='service' version='0'>
...
<dependency name='filesystem-local' grouping='require_all' restart_on='error' type='service'>
  <service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependency name='qfs' grouping='require_all' restart_on='error' type='service'>
  <service_fmri value='svc:/network/qfs/shared-mount:default'/>
</dependency>
<dependent name='nfs-server_multi-user-server' restart_on='none' grouping='optional_all'>
  <service_fmri value='svc:/milestone/multi-user-server'/>
</dependent>
:wq
[qfsnfs]root@solaris:~#
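The manual edit can also be scripted. The awk sketch below (our own illustration) splices the qfs dependency in right after the closing tag of the filesystem-local dependency; a trimmed sample manifest stands in for /var/tmp/server.xml:

```shell
# Sketch: insert the qfs dependency into an exported NFS manifest.
cat > /tmp/server.xml <<'EOF'
<dependency name='filesystem-local' grouping='require_all' restart_on='error' type='service'>
<service_fmri value='svc:/system/filesystem/local'/>
</dependency>
<dependent name='nfs-server_multi-user-server' restart_on='none' grouping='optional_all'>
<service_fmri value='svc:/milestone/multi-user-server'/>
</dependent>
EOF
awk -v q="'" '
    { print }
    # After the filesystem-local block closes, emit the qfs dependency.
    inblock && /<\/dependency>/ {
        print "<dependency name=" q "qfs" q " grouping=" q "require_all" q \
              " restart_on=" q "error" q " type=" q "service" q ">"
        print "<service_fmri value=" q "svc:/network/qfs/shared-mount:default" q "/>"
        print "</dependency>"
        inblock = 0
    }
    /filesystem-local/ { inblock = 1 }
' /tmp/server.xml > /tmp/server.xml.new
qfs_lines=$(grep -c qfs /tmp/server.xml.new)
echo "$qfs_lines"
```

The count of 2 confirms that both inserted qfs lines (the dependency name and the service FMRI) landed in the output file.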
Validate the manifest file using the svccfg validate command.
[qfsnfs]root@solaris:~# svccfg validate /var/tmp/server.xml
If the svccfg validate command reports errors, correct the errors and revalidate the file.
In the example, the svccfg validate command returns XML parsing errors. We inadvertently omitted an ending tag </dependency> when saving the file. So we re-open the file in the vi editor and correct the problem:
[qfsnfs]root@solaris:~# svccfg validate /var/tmp/server.xml
/var/tmp/server.xml:75: parser error : Opening and ending tag mismatch: dependency line 29 and service
</service>
^
/var/tmp/server.xml:76: parser error : expected '>'
</service_bundle>
^
/var/tmp/server.xml:77: parser error : Premature end of data in tag service_bundle line 3
^
svccfg: couldn't parse document
[qfsnfs]root@solaris:~# vi /var/tmp/server.xml
Once the svccfg validate command completes without error, disable NFS using the svcadm disable nfs/server command.
In the example, the svccfg validate command returned no output, so the file is valid and we can disable NFS:
[qfsnfs]root@solaris:~# svccfg validate /var/tmp/server.xml
[qfsnfs]root@solaris:~# svcadm disable nfs/server
Delete the existing NFS server configuration using the svccfg delete nfs/server command.
[qfsnfs]root@solaris:~# svccfg delete nfs/server
Import the manifest file into the Service Management Facility (SMF) using the svccfg import command.
[qfsnfs]root@solaris:~# svccfg import /var/tmp/server.xml
Re-enable NFS using the svcadm enable nfs/server command.
NFS is configured to use the updated configuration.
[qfsnfs]root@solaris:~# svcadm enable nfs/server
Confirm that the qfs dependency has been applied. Make sure that the command svcs -d svc:/network/nfs/server:default displays the /network/qfs/shared-mount:default service:
[qfsnfs]root@solaris:~# svcs -d svc:/network/nfs/server:default
STATE STIME FMRI
...
online Nov_01 svc:/network/qfs/shared-mount:default
...
Share the SAM-QFS file system using the procedures described in the administration documentation for your version of the Oracle Solaris operating system. The steps below summarize the procedure for Solaris 11.1:
Log in to the host of the SAM-QFS file system that you want to share using NFS. Log in as root.
If the file system is a SAM-QFS shared file system, log in to the metadata server for the file system. In the examples below, the server name is qfsnfs.
[qfsnfs]root@solaris:~#
Enter the command share -F nfs -o sharing-options sharepath, where the -F switch specifies the nfs sharing protocol and sharepath is the path to the shared resource. If the optional -o parameter is used, sharing-options is any of the following:
rw makes sharepath available with read and write privileges to all clients.
ro makes sharepath available with read-only privileges to all clients.
rw=clients makes sharepath available with read and write privileges to clients, a colon-delimited list of one or more clients that have access to the share.
ro=clients makes sharepath available with read-only privileges to clients, a colon-delimited list of one or more clients that have access to the share.
In the example, we share the /qfsms file system read/write with clients nfsclient1 and nfsclient2 and read-only with nfsclient3:
[qfsnfs]root@solaris:~# share -F nfs -o rw=nfsclient1:nfsclient2,ro=nfsclient3 /qfsms
When you enter the command, the system automatically restarts the NFS server daemon, nfsd. See the share_nfs man page for additional options and details.
Check the sharing parameters using the command share -F nfs.
In the example, the command output shows that we have correctly configured the share:
[qfsnfs]root@solaris:~# share -F nfs
/qfsms sec=sys,rw=nfsclient1:nfsclient2,ro=nfsclient3
[qfsnfs]root@solaris:~#
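When auditing shares, the access lists can be pulled out of the share listing with standard text tools. A sketch of our own, run against the sample output line above:

```shell
# Sketch: extract the read/write client list from "share -F nfs"-style
# output. The sample line mimics the listing shown above.
share_line="/qfsms sec=sys,rw=nfsclient1:nfsclient2,ro=nfsclient3"
rw_clients=$(echo "$share_line" | sed -n 's/.*rw=\([^,]*\).*/\1/p')
echo "rw clients: $rw_clients"
```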
Next, Mount the NFS-Shared SAM-QFS File System on the NFS Clients.
Mount the NFS server's file system at a convenient mount point on client systems. For each client, proceed as follows:
Log in to the client as root.
In the example, the NFS client is named nfsclient1:
[nfsclient1]root@solaris:~#
Back up the operating system's /etc/vfstab file.
[nfsclient1]root@solaris:~# cp /etc/vfstab /etc/vfstab.backup
Open the /etc/vfstab file in a text editor.
In the example, we use the vi editor.
[nfsclient1]root@solaris:~# vi /etc/vfstab
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices       -        /devices  devfs   -     no       -
...
In the first column of the /etc/vfstab file, name the file device that you want to mount by specifying the name of the NFS server and the mount point of the file system that you want to share, separated by a colon.
In the example, the NFS server is named qfsnfs, the shared file system is named qfsms, and the mount point on the server is /qfsms:
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms
In the second column of the /etc/vfstab file, enter a hyphen (-) so that the local system does not try to check the remote file system for consistency:
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms -
In the third column of the /etc/vfstab file, enter the local mount point where you will mount the remote file system.
In the example, the mount point will be the directory /qfsnfs:
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms - /qfsnfs
In the fourth column of the /etc/vfstab file, enter the file-system type nfs.
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms - /qfsnfs nfs
We use the nfs file-system type, because the client mounts the remote QFS file system as an NFS file system.
In the fifth column of the /etc/vfstab file, enter a hyphen (-), because the local system is not checking the remote file system for consistency.
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms - /qfsnfs nfs -
In the sixth column of the /etc/vfstab file, enter yes to mount the remote file system at boot or no to mount it manually, on demand.
In the example, we enter yes:
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms - /qfsnfs nfs - yes
In the last column of the /etc/vfstab file, enter the hard and intr NFS mount options to force unlimited, uninterruptible retries, or set a specified number of retries by entering the soft, retrans, and timeo mount options with retrans set to 120 or more and timeo set to 3000 tenths of a second.
Setting the hard retry option or specifying the soft option with a sufficiently long timeout and a sufficient number of retries keeps NFS requests from failing when the requested files reside on removable volumes that cannot be immediately mounted. See the Solaris mount_nfs man page for more information on these mount options.
In the example, we enter the soft mount options:
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms - /qfsnfs nfs - yes soft,retrans=120,timeo=3000
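To see why these values are reasonable, note that timeo is expressed in tenths of a second. A quick arithmetic sketch:

```shell
# Sketch: convert the timeo mount option (tenths of a second) into seconds.
# With timeo=3000, each NFS attempt waits 300 seconds before retransmitting,
# and retrans=120 allows many retransmissions before a soft mount gives up.
timeo=3000
retrans=120
per_try_seconds=$((timeo / 10))
echo "timeout per attempt: ${per_try_seconds}s, retransmissions: ${retrans}"
```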
If you are using NFS 2, set the rsize mount parameter to 32768. Accept the default value for other versions of NFS.
The rsize mount parameter sets the read buffer size to 32768 bytes (vs. the default, 8192 bytes). The example shows what an NFS 2 configuration would look like:
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs2:/qfs2 - /qfsnfs2 nfs - yes ...,rsize=32768
If you are using NFS 2, set the wsize mount parameter to 32768. Accept the default value for other versions of NFS.
The wsize mount parameter sets the write buffer size to the specified number of bytes (by default, 8192 bytes). The example shows what an NFS 2 configuration would look like:
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs2:/qfs2 - /qfsnfs2 nfs - yes ...,wsize=32768
Save the /etc/vfstab file and exit the editor.
#File Device   Device   Mount     System  fsck  Mount    Mount
#to Mount      to fsck  Point     Type    Pass  at Boot  Options
#------------  -------  --------  ------  ----  -------  ----------------
/devices - /devices devfs - no -
...
qfsnfs:/qfsms - /qfsnfs nfs - yes soft,retrans=120,timeo=3000
:wq
[nfsclient1]root@solaris:~#
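Before mounting, it is worth confirming that the new line parses into the seven vfstab fields. A sketch of our own, using the entry from the example:

```shell
# Sketch: count whitespace-separated fields in a candidate vfstab entry.
# A complete entry has seven: device to mount, device to fsck, mount point,
# file-system type, fsck pass, mount at boot, and mount options.
vfstab_entry="qfsnfs:/qfsms - /qfsnfs nfs - yes soft,retrans=120,timeo=3000"
field_count=$(echo "$vfstab_entry" | awk '{ print NF }')
echo "fields: $field_count"
```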
Create the mount point specified in the /etc/vfstab file, and set the access permissions for the mount point.
Users must have execute (x) permission to change to the mount-point directory and access files in the mounted file system. In the example, we create the /qfsnfs mount-point directory and set permissions to 755 (-rwxr-xr-x):
[nfsclient1]root@solaris:~# mkdir /qfsnfs
[nfsclient1]root@solaris:~# chmod 755 /qfsnfs
Mount the shared file system:
[nfsclient1]root@solaris:~# mount /qfsnfs
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".
SMB makes SAM-QFS accessible to Microsoft Windows hosts and provides interoperability features, such as case-insensitivity, support for DOS attributes, and support for NFSv4 Access Control Lists (ACLs). The Oracle Solaris OS provides a Server Message Block (SMB) protocol server and client implementation that includes support for numerous SMB dialects including NT LM 0.12 and Common Internet File System (CIFS).
Starting with Release 5.4, SAM-QFS supports Windows Security Identifiers (SIDs). Windows identities no longer need to be explicitly defined using the idmap
service or provided by the Active Directory service.
To configure SMB service with SAM-QFS Release 5.4 file systems, carry out the following tasks:
Review Oracle Solaris SMB Configuration and Administration Documentation.
Explicitly Map Windows Identities for the SMB Server (Optional).
Configure the SMB Server for Windows Active Directory Domains or Workgroups.
The sections below outline the parts of the SMB configuration process as they apply to SAM-QFS file systems. They are not comprehensive and do not cover all possible scenarios. So review the full instructions for configuring Oracle Solaris SMB servers, integrating the servers into an existing Windows environment, and mounting SMB shares on Solaris systems. Full instructions can be found in the volume Managing SMB and Windows Interoperability in Oracle Solaris 11.1 in the Oracle Solaris 11.1 Information Library.
While SAM-QFS now fully supports Windows Security Identifiers (SIDs), explicitly defining the relationships between UNIX identities and SIDs continues to have advantages in some situations. For example, in heterogenous environments where users have both UNIX and Windows identities, you may wish to create explicit mappings using the idmap
service or the Active Directory service. See the Managing SMB and Windows Interoperability in Oracle Solaris 11.1 for full instructions.
SAM-QFS file systems that are shared using SMB/CIFS must use the new Access Control List (ACL) implementation adopted by Network File System (NFS) version 4 and introduced in Oracle Solaris 11. Older versions of Solaris and NFS used ACLs that were based on a POSIX-draft specification that is not compatible with the Windows ACL implementation.
New file systems that you create with SAM-QFS Release 5.4 use NFS version-4 ACLs by default on Solaris 11. But, if you need to share existing SAM-QFS file systems with SMB/CIFS clients, you must convert the existing POSIX-style ACLs using the appropriate procedure:
Convert a SAM-QFS Unshared File System that Uses POSIX-Style ACLs
Convert a SAM-QFS Shared File System that Uses POSIX-Style ACLs
Proceed as follows:
Log in to the host as root.
In the example, we log in to the host qfs-host:
[qfs-host]root@solaris:~#
Make sure that the host runs Oracle Solaris 11.1 or higher. Use the command uname -r
.
[qfs-host]root@solaris:~# uname -r
5.11
[qfs-host]root@solaris:~#
Unmount the file system using the command umount mount-point, where mount-point is the mount point of the SAM-QFS file system.
See the umount_samfs man page for further details. In the examples below, the server name is qfs-host and the file system is /qfsms:
[qfs-host]root@solaris:~# umount /qfsms
Convert the file system using the samfsck -F -A file-system command, where the -F option specifies a check and repair of the file system, the -A option specifies conversion of the ACLs, and file-system is the name of the file system that you need to convert.
The -F option is required when the -A option is specified. If the samfsck -F -A command returns errors, the process aborts and no ACLs are converted (for full descriptions of these options, see the samfsck man page).
[qfs-host]root@solaris:~# samfsck -F -A /qfsms
If errors are returned and no ACLs are converted, use the samfsck -F -a file-system command to forcibly convert the ACLs.
The -a option specifies a forced conversion. The -F option is required when the -a option is specified (for full descriptions of these options, see the samfsck man page).
[qfs-host]root@solaris:~# samfsck -F -a /qfsms
Now, Configure the SMB Server for Windows Active Directory Domains or Workgroups.
Log in to the file-system metadata server (MDS) as root.
In the example, we log in to the metadata server sharedqfs-mds:
[sharedqfs-mds]root@solaris:~#
Make sure that the metadata server runs Oracle Solaris 11.1 or higher. Use the command uname -r
.
[sharedqfs-mds]root@solaris:~# uname -r
5.11
[sharedqfs-mds]root@solaris:~#
Log in to each SAM-QFS client as root, and make sure that each client runs Oracle Solaris 11.1 or higher.
In the example, we open terminal windows and remotely log in to client hosts sharedqfs-client1 and sharedqfs-client2 using ssh to get the Solaris version from the log-in banner:
[sharedqfs-mds]root@solaris:~# ssh root@sharedqfs-client1
Password:
Oracle Corporation SunOS 5.11 11.1 September 2013
[sharedqfs-client1]root@solaris:~#
[sharedqfs-mds]root@solaris:~# ssh root@sharedqfs-client2
Password:
Oracle Corporation SunOS 5.11 11.1 September 2013
[sharedqfs-client2]root@solaris:~#
Unmount the SAM-QFS shared file system from each SAM-QFS client using the command umount mount-point, where mount-point is the mount point of the SAM-QFS file system.
See the umount_samfs man page for further details. In the example, we unmount /sharedqfs from our two clients, sharedqfs-client1 and sharedqfs-client2:
[sharedqfs-client1]root@solaris:~# umount /sharedqfs
[sharedqfs-client1]root@solaris:~#
[sharedqfs-client2]root@solaris:~# umount /sharedqfs
[sharedqfs-client2]root@solaris:~#
Unmount the SAM-QFS shared file system from the metadata server using the command umount -o await_clients=interval mount-point, where mount-point is the mount point of the SAM-QFS file system and interval is the number of seconds by which the -o await_clients option delays execution.
When the umount command is issued on the metadata server of a SAM-QFS shared file system, the -o await_clients option makes umount wait the specified number of seconds so that clients have time to unmount the share. It has no effect if you unmount an unshared file system or issue the command on a SAM-QFS client. See the umount_samfs man page for further details.
In the example, we unmount the /sharedqfs file system from the metadata server sharedqfs-mds while allowing 60 seconds for clients to unmount:
[sharedqfs-mds]root@solaris:~# umount -o await_clients=60 /sharedqfs
Convert the file system from the POSIX-style ACLs to NFS version 4 ACLs. On the metadata server, use the command samfsck -F -A file-system, where the -F option specifies a check and repair of the file system, the -A option specifies conversion of the ACLs, and file-system is the name of the file system that you need to convert.
The -F option is required when the -A option is specified. If the samfsck -F -A command returns errors, the process aborts and no ACLs are converted (for full descriptions of these options, see the samfsck man page). In the example, we convert a SAM-QFS file system named /sharedqfs:
[sharedqfs-mds]root@solaris:~# samfsck -F -A /sharedqfs
If errors are returned and no ACLs are converted, forcibly convert the ACLs. On the metadata server, use the samfsck -F -a file-system command.
The -a option specifies a forced conversion. The -F option is required when the -a option is specified (for full descriptions of these options, see the samfsck man page). In the example, we forcibly convert the SAM-QFS file system named /sharedqfs:
[sharedqfs-mds]root@solaris:~# samfsck -F -a /sharedqfs
Now, Configure the SMB Server for Windows Active Directory Domains or Workgroups.
Oracle Solaris SMB services can operate in either of two mutually exclusive modes: domain or workgroup. Choose one or the other based on your environment and authentication needs:
If you need to give Active Directory domain users access to the Solaris SMB service, Configure the SMB Server in Domain Mode.
If you need to give local Solaris users access to the SMB service and either do not have Active Directory domains or do not need to give Active Directory domain users access to the service, Configure the SMB Server in Workgroup Mode.
Contact the Windows Active Directory administrator and obtain the following information:
the name of the authenticated Active Directory user account that you need to use when joining the Active Directory domain
the organizational unit that you need to use in place of the default Computers container for the account (if any)
the fully qualified LDAP/DNS domain name for the domain where the SAM-QFS file system is to be shared.
Log in to the host of the SAM-QFS file system that you want to share using SMB/CIFS. Log in as root.
If the file system is a SAM-QFS shared file system, log in to the metadata server for the file system. In the examples below, the server name is qfssmb.
[qfssmb]root@solaris:~#
Open-source Samba and SMB servers cannot be used together on a single Oracle Solaris system. So check whether the Samba service is running by piping the output of the svcs services status command into grep and the regular expression samba.
In the example, the output of the svcs command contains a match for the regular expression, so the Samba service is running:
[qfssmb]root@solaris:~# svcs | grep samba
legacy_run Nov_03 lrc:/etc/rc3_d/S90samba
If the Samba service (svc:/network/samba) is running, disable it along with the Windows Internet Naming Service/WINS (svc:/network/wins), if running. Use the svcadm disable command.
[qfssmb]root@solaris:~# svcadm disable svc:/network/samba
[qfssmb]root@solaris:~# svcadm disable svc:/network/wins
Now use the svcadm enable -r smb/server command to start the SMB server and any services on which it depends.
[qfssmb]root@solaris:~# svcadm enable -r smb/server
Make sure that the system clock on the SAM-QFS host is within five minutes of the system clock of the Microsoft Windows domain controller:
If the Windows domain controller uses Network Time Protocol (NTP) servers, configure the SAM-QFS host to use the same servers. Create an /etc/inet/ntpclient.conf
file on the SAM-QFS host and start the ntpd
daemon using the svcadm enable ntp
command (see the ntpd
man page and your Oracle Solaris administration documentation for full information).
Otherwise, synchronize the SAM-QFS host with the domain controller by running the ntpdate
domain-controller-name
command (see the ntpdate
man page for details) or manually set the system clock on the SAM-QFS host to the time displayed by the domain controller's system clock.
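The manual alternative can be sketched as follows, assuming the ntpdate utility is available and using the hypothetical name dc.example.com in place of your domain controller:

```shell
# Query the offset between this host and the domain controller's clock
# without changing anything (-q reports only):
ntpdate -q dc.example.com

# If the reported offset approaches five minutes (300 seconds),
# synchronize the local clock:
ntpdate dc.example.com
```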
Join the Windows domain using the command smbadm join -u username -o organizational-unit domain-name, where username is the name of the user account specified by the Active Directory administrator, the optional organizational-unit is the account container specified (if any), and domain-name is the specified fully qualified LDAP or DNS domain name.
In the example, we join the Windows domain this.example.com using the user account admin and the organizational unit smbsharing:
[qfssmb]root@solaris:~# smbadm join -u admin -o smbsharing this.example.com
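Afterward, you can verify the result with the smbadm list command, which reports the domain (or workgroup) that the host currently belongs to (output format varies by Solaris release):

```shell
[qfssmb]root@solaris:~# smbadm list
```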
Contact the Windows network administrator and obtain the name of the Windows workgroup that the host of the SAM-QFS file system should join.
The default workgroup is named WORKGROUP
.
Log in to the host of the SAM-QFS file system. Log in as root
.
If the file system is a SAM-QFS shared file system, log in to the metadata server for the file system. In the examples below, the server name is qfssmb
.
[qfssmb]root@solaris:~#
Open-source Samba and SMB servers cannot be used together on a single Oracle Solaris system, so check whether the Samba service is running. Pipe the output of the svcs service-status command to grep, searching for the regular expression samba.
In the example, the output of the svcs
command contains a match for the regular expression, so the SMB service is running:
[qfssmb]root@solaris:~# svcs | grep samba
legacy_run Nov_03 lrc:/etc/rc3_d/S90samba
If the Samba service (svc:/network/samba
) is running, disable it along with the Windows Internet Naming Service/WINS (svc:/network/wins
) services, if running. Use the svcadm disable
command.
[qfssmb]root@solaris:~# svcadm disable svc:/network/samba
[qfssmb]root@solaris:~# svcadm disable svc:/network/wins
Now use the svcadm enable -r smb/server
command to start the SMB server and any services on which it depends.
[qfssmb]root@solaris:~# svcadm enable -r smb/server
Join the workgroup using the smbadm join command with the -w (workgroup) switch and the name of the workgroup specified by the Windows network administrator.
In the example, the specified workgroup is named crossplatform
.
[qfssmb]root@solaris:~# smbadm join -w crossplatform
Configure the SAM-QFS host for encryption of SMB passwords. Open the /etc/pam.d/other
file in a text editor, add the command line password required pam_smb_passwd.so.1 nowarn
, and save the file.
In the example, we use the vi
editor:
[qfssmb]root@solaris:~# vi /etc/pam.d/other
# Copyright (c) 2012, Oracle and/or its affiliates. All rights reserved.
#
# PAM configuration
#
# Default definitions for Authentication management
# Used when service name is not explicitly mentioned for authentication
#
auth definitive         pam_user_policy.so.1
...
password required       pam_authtok_store.so.1
password required       pam_smb_passwd.so.1 nowarn
:wq
[qfssmb]root@solaris:~#
See the pam_smb_passwd
man page for further details.
Once the pam_smb_passwd
module has been installed, use the passwd
local-username
command to generate an encrypted version of the password for user local-username
so that the SMB server can log in to the Windows workgroup.
The SMB server cannot authenticate users using the same encrypted versions of passwords that the Solaris operating system uses. In the example, we generate an encrypted SMB password for the user smbsamqfs
:
[qfssmb]root@solaris:~# passwd smbsamqfs
Share the SAM-QFS file system using the procedures described in the administration documentation for your version of the Oracle Solaris operating system. The steps below summarize the procedure for Solaris 11.1:
Log in to the host of the SAM-QFS file system that you want to share using SMB/CIFS. Log in as root
.
If the file system is a SAM-QFS shared file system, log in to the metadata server for the file system. In the examples below, the server name is qfssmb
.
[qfssmb]root@solaris:~#
Configure the share. Use the command share -F smb -o sharing-options sharepath sharename, where the -F switch specifies the smb sharing protocol, sharepath is the path to the shared resource, and sharename is the name that you want to use for the share. The value of the optional -o parameter, sharing-options, can include any of the following:
abe=[true|false]
When the access-based enumeration (ABE) policy for a share is true, directory entries to which the requesting user has no access are omitted from directory listings returned to the client.
ad-container=cn=user,ou=organization,dc=domain-dns
The Active Directory container limits share access to the domain objects identified by the specified Lightweight Directory Access Protocol (LDAP) relative distinguished name (RDN) attribute values: cn (user object class), ou (organizational unit object class), and dc (domain DNS object class).
For full information on using Active Directory containers with SMB/CIFS, consult Internet Engineering Task Force Request For Comment (RFC) 2253 and your Microsoft Windows directory services documentation.
catia=[true|false]
When CATIA character substitution is true, any characters in a CATIA version-4 file name that are illegal in Windows are replaced by legal equivalents. See the share_smb man page for a list of substitutions.
csc=[manual|auto|vdo|disabled]
A client-side caching (csc) policy controls client-side caching of files for offline use. The manual policy lets clients cache files when requested by users but disables automatic, file-by-file reintegration (this is the default). The auto policy lets clients automatically cache files and enables automatic file-by-file reintegration. The vdo policy lets clients automatically cache files for offline use, enables file-by-file reintegration, and lets clients work from the local cache even while offline. The disabled policy does not allow client-side caching.
dfsroot=[true|false]
In a Microsoft Distributed File System (DFS), a root share (dfsroot=true) is the share that organizes a group of widely distributed shared folders into a single DFS file system that can be more easily managed. For full information, see your Microsoft Windows Server documentation.
guestok=[true|false]
When the guestok policy is true, the locally defined guest account can access the share. When it is false or left undefined (the default), the guest account cannot access the share. This policy lets you map the Windows Guest user to a locally defined UNIX user name, such as guest or nobody:
# idmap add winname:Guest unixuser:guest
The locally defined account can then be authenticated against a password stored in /var/smb/smbpasswd
, if desired. See the idmap
man page for more information.
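To verify the mapping, the idmap list command displays the configured name-based mapping rules (see the idmap man page):

```shell
# List name-based identity-mapping rules; the Guest rule added above
# should appear in the output:
idmap list
```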
rw=[*|[-][hostname|netgroup|domainname.suffix|@ipaddress|@netname][:...]]
The rw policy grants or denies read-write access to any client that matches the supplied access list.
Access lists contain either a single asterisk (*), meaning all, or a colon-delimited list of criteria for deciding whether clients can or cannot access the share. The criteria can include specified host names, network groups, full LDAP/DNS domain names, and/or the symbol @ plus all or part of an IP address or domain name. A minus sign (-) preceding an entry denies access to that list item. Access lists are evaluated left to right until the client satisfies one of the criteria. See the share_smb man page for further details.
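To illustrate the left-to-right evaluation, the following sketch uses a hypothetical host name, badhost, and the partial address @192.168.1: read-write access is denied to badhost but granted to the rest of the 192.168.1 subnet, because the deny entry matches first:

```shell
# Hypothetical access list: badhost is rejected before the subnet
# entry @192.168.1 (any client address beginning with 192.168.1) is
# checked, so only the remaining subnet hosts get read-write access.
share -F smb -o rw=-badhost:@192.168.1 /qfsms
```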
ro=[*|[-][hostname|netgroup|domainname.suffix|@ipaddress|@netname][:...]]
The ro policy grants or denies read-only access to any client that matches the access list.
none=[*|[-][hostname|netgroup|domainname.suffix|@ipaddress|@netname][:...]]
The none policy denies access to any client that matches the access list. If the access list is an asterisk (*), the ro and rw policies can override the none policy.
In the example, we share the /qfsms file system read/write with clients smbclient1 and smbclient2 and read-only with smbclient3:
[qfssmb]root@solaris:~# share -F smb -o rw=smbclient1:smbclient2,ro=smbclient3 /qfsms
When you enter the command, the system automatically restarts the SMB server daemon, smbd
.
Check the sharing parameters. Use the command share -F smb.
In the example, the command output shows that we have correctly configured the share:
[qfssmb]root@solaris:~# share -F smb
/qfsms sec=sys,rw=smbclient1:smbclient2,ro=smbclient3
If you plan on using the sideband database feature, go to "Configuring the SAM-QFS Reporting Database".
Otherwise, go to "Configuring Notifications and Logging".