System Administration Guide, Volume 3

Chapter 31 Accessing Remote File Systems Reference

This chapter provides an introduction to the NFS commands. This chapter also provides information about all of the pieces of the NFS environment and how these pieces work together.

NFS Files

You need several files to support NFS activities on any computer. Many of these files are ASCII, but some of them are data files. Table 31-1 lists these files and their functions.

Table 31-1 NFS Files

File Name 

Function 

/etc/default/fs

Lists the default file system type for local file systems. 

/etc/dfs/dfstab

Lists the local resources to be shared. 

/etc/dfs/fstypes

Lists the default file-system types for remote file systems. 

/etc/default/nfslogd

Lists configuration information for the NFS server logging daemon, nfslogd.

/etc/dfs/sharetab

Lists the resources (local and remote) that are shared (see the sharetab(4) man page); do not edit this file.

/etc/mnttab

Lists file systems that are currently mounted, including automounted directories (see the mnttab(4) man page); do not edit this file.

/etc/netconfig

Lists the transport protocols; do not edit this file. 

/etc/nfs/nfslog.conf

Lists general configuration information for NFS server logging. 

/etc/nfs/nfslogtab

Lists information for log post-processing by nfslogd; do not edit this file.

/etc/nfssec.conf

Lists NFS security services; do not edit this file. 

/etc/rmtab

Lists file systems remotely mounted by NFS clients (see the rmtab(4) man page); do not edit this file.

/etc/vfstab

Defines file systems to be mounted locally (see the vfstab(4) man page).

The first entry in /etc/dfs/fstypes is often used as the default file-system type for remote file systems. This entry defines the NFS file-system type as the default.

Only one entry is in /etc/default/fs: the default file-system type for local disks. You can determine the file-system types that are supported on a client or server by checking the files in /kernel/fs.
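
For example, you can check both settings as follows (the listing of /kernel/fs is abbreviated and will vary by system):


% cat /etc/default/fs
LOCAL=ufs
% ls /kernel/fs
autofs  cachefs  hsfs  lofs  nfs  procfs  tmpfs  ufs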

/etc/default/nfslogd

This file defines some of the parameters used when using NFS server logging. The following parameters can be defined.

CYCLE_FREQUENCY

Determines the number of hours that must pass before the log files are cycled. The default value is 24 hours. This option is used to prevent the log files from growing too large.

IDLE_TIME

Sets the number of seconds nfslogd should sleep before checking for more information in the buffer file. It also determines how often the configuration file is checked. This parameter, along with MIN_PROCESSING_SIZE, determines how often the buffer file is processed. The default value is 300 seconds. Increasing this number can improve performance by reducing the number of checks.

MAPPING_UPDATE_INTERVAL

Specifies the number of seconds between updates of the records in the file-handle-to-path mapping tables. The default value is 86400 seconds or one day. This parameter helps keep the file-handle-to-path mapping tables up-to-date without having to continually update the tables.

MAX_LOGS_PRESERVE

Determines the number of log files to be saved. The default value is 10.

MIN_PROCESSING_SIZE

Sets the minimum number of bytes that the buffer file must reach before processing and writing to the log file. This parameter, along with IDLE_TIME, determines how often the buffer file is processed. The default value is 524288 bytes. Increasing this number can improve performance by reducing the number of times the buffer file is processed.

PRUNE_TIMEOUT

Selects the number of hours that must pass before a file-handle-to-path mapping record times out and can be pruned. The default value is 168 hours or 7 days.

UMASK

Specifies the permissions for the log files that are created by nfslogd. The default value is 0137.
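
As an example, a hypothetical /etc/default/nfslogd that overrides two of these defaults might contain the following entries (the values are for illustration only):


# cat /etc/default/nfslogd
CYCLE_FREQUENCY=48
MAX_LOGS_PRESERVE=20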

/etc/nfs/nfslog.conf

This file defines the path, file names, and type of logging to be used by nfslogd. Each definition is associated with a tag. Starting NFS server logging requires that you identify the tag for each file system. The global tag defines the default values. The following parameters can be used with each tag as needed.

defaultdir=path

Specifies the default directory path for the logging files.

log=path/filename

Sets the path and file name for the log files.

fhtable=path/filename

Selects the path and file name for the file-handle-to-path database files.

buffer=path/filename

Determines the path and file name for the buffer files.

logformat=basic|extended

Selects the format to be used when creating user-readable log files. The basic format produces a log file similar to some ftpd daemons. The extended format gives a more detailed view.

For the parameters that can specify both the path and the file name, if the path is not specified, the path defined by defaultdir is used. Also, you can override defaultdir by using an absolute path.

To make identifying the files easier, place the files in separate directories. Here is an example of the changes needed.


% cat /etc/nfs/nfslog.conf
#ident  "@(#)nfslog.conf        1.5     99/02/21 SMI"
#
  .
  .
# NFS server log configuration file.
#

global  defaultdir=/var/nfs \
        log=nfslog fhtable=fhtable buffer=nfslog_workbuffer

publicftp log=logs/nfslog fhtable=fh/fhtables buffer=buffers/workbuffer

In this example, any file system shared with log=publicftp would use the following values: the default directory would be /var/nfs, log files would be stored in /var/nfs/logs/nfslog*, file-handle-to-path database tables would be stored in /var/nfs/fh/fhtables, and buffer files would be stored in /var/nfs/buffers/workbuffer.

NFS Daemons

To support NFS activities, several daemons are started when a system goes into run level 3 or multiuser mode. Two of these daemons (mountd and nfsd) are run on systems that are NFS servers. The automatic startup of the server daemons depends on the existence of entries labeled with the NFS file-system type in /etc/dfs/sharetab.

The other two daemons (lockd and statd) are run on NFS clients to support NFS file locking. These daemons must also run on the NFS servers.

automountd

This daemon handles the mount and unmount requests from the autofs service. The syntax of the command is:

automountd [ -Tnv ] [ -D name=value ]

where -T selects to display each RPC call to standard output, -n disables browsing on all autofs nodes, -v selects to log all status messages to the console, and -D name=value substitutes value for the automount map variable indicated by name. The default value for the automount map is /etc/auto_master. Use the -T option for troubleshooting.
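
For example, to start the daemon with RPC tracing enabled and to assign a value to a hypothetical map variable named HOST:


# /usr/lib/autofs/automountd -T -D HOST=dist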

lockd

This daemon supports record-locking operations on NFS files. It sends locking requests from the client to the NFS server. On the NFS server, it starts local locking. The daemon is normally started without any options. You can use three options with this command (see the lockd(1M) man page).

The -g graceperiod option selects the number of seconds that the clients have to reclaim locks after a server reboot. During this time, the NFS server only processes reclaims of old locks. All other requests for service must wait until the grace period is over. This option affects the NFS server-side response, so it can be changed only on an NFS server. The default value for graceperiod is 45 seconds. Reducing this value means that NFS clients can resume operation more quickly after a server reboot, but a reduction increases the chances that a client might not be able to recover all its locks.

The -t timeout option selects the number of seconds to wait before retransmitting a lock request to the remote server. This option affects the NFS client-side service. The default value for timeout is 15 seconds. Decreasing the timeout value can improve response time for NFS clients on a noisy network, but it can cause additional server load by increasing the frequency of lock requests.

The nthreads option specifies the maximum number of concurrent threads that the server handles per connection. Base the value for nthreads on the load expected on the NFS server. The default value is 20. Because each NFS client using TCP uses a single connection with the NFS server, each TCP client is granted the ability to use up to 20 concurrent threads on the server. All NFS clients using UDP share a single connection with the NFS server. Under these conditions it might be necessary to increase the number of threads available for the UDP connection. A minimum calculation would be to allow two threads for each UDP client, but this is specific to the workload on the client, so two threads per client might not be sufficient. The disadvantage to using more threads is that when the threads are used, more memory is used on the NFS server, but if the threads are never used, increasing nthreads will have no effect.
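
For example, to start the daemon with a shorter grace period, a shorter retransmission timeout, and 40 threads (illustrative values only; base them on your own workload):


# /usr/lib/nfs/lockd -g 30 -t 10 40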

mountd

This is a remote procedure call (RPC) server that handles file-system mount requests from remote systems and provides access control. It checks /etc/dfs/sharetab to determine which file systems are available for remote mounting and which systems are allowed to do the remote mounting. You can use two options with this command (see the mountd(1M) man page): -v and -r.

The -v option runs the command in verbose mode. Each time an NFS server determines the access a client should get, a message is printed on the console. The information generated can be useful when trying to determine why a client cannot access a file system.

The -r option rejects all future mount requests from clients. This does not affect clients that already have a file system mounted.
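
For example, to run the daemon in verbose mode while troubleshooting (mountd is normally started by the boot scripts, so this manual invocation is shown for illustration only):


# /usr/lib/nfs/mountd -v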

nfsd

This daemon handles other client file-system requests. You can use several options with this command. See the nfsd(1M) man page for a complete listing.

The -l option sets the connection queue length for NFS over connection-oriented transports such as TCP. The default value is 32 entries.

The -c #_conn option selects the maximum number of connections per connection-oriented transport. The default value for #_conn is unlimited.

The nservers option is the maximum number of concurrent requests that a server can handle. The default value for nservers is 1, but the startup scripts select 16.

Unlike older versions of this daemon, nfsd does not spawn multiple copies to handle concurrent requests. Checking the process table with ps only shows one copy of the daemon running.
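
For example, the startup scripts run a command similar to the following, which starts the daemon on all available transports with 16 concurrent requests:


# /usr/lib/nfs/nfsd -a 16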

nfslogd

This daemon provides operational logging. NFS operations against a server are logged based on the configuration options defined in /etc/default/nfslogd. When NFS server logging is enabled, records of all RPC operations on a selected file system are written into a buffer file by the kernel. Then nfslogd post-processes these requests. The name service switch is used to help map UIDs to logins and IP addresses to host names. The number is recorded if no match can be found through the identified name services.

Mapping of file handles to path names is also handled by nfslogd. The daemon keeps track of these mappings in a file-handle-to-path mapping table. One mapping table exists for each tag identified in /etc/nfs/nfslog.conf. After post-processing, the records are written out to ASCII log files.

statd

This daemon works with lockd to provide crash and recovery functions for the lock manager. It tracks the clients that hold locks on an NFS server. If a server crashes, on rebooting statd on the server contacts statd on the client. The client statd can then attempt to reclaim any locks on the server. The client statd also informs the server statd when a client has crashed, so that the client's locks on the server can be cleared. There are no options to select with this daemon. For more information see the statd(1M) man page.

In the Solaris 7 release, the way that statd keeps track of the clients has been improved. In all earlier Solaris releases, statd created files in /var/statmon/sm for each client using the client's unqualified host name. This caused problems if you had two clients in different domains that shared a host name, or if there were clients that were not resident in the same domain as the NFS server. Because the unqualified host name only lists the host name, without any domain or IP-address information, the older version of statd had no way to differentiate between these types of clients. To fix this problem, the Solaris 7 statd creates a symbolic link in /var/statmon/sm to the unqualified host name using the IP address of the client. The new link will look like:


# ls -l /var/statmon/sm
lrwxrwxrwx   1 root          11 Apr 29 16:32 ipv4.192.9.200.1 -> myhost
--w-------   1 root          11 Apr 29 16:32 myhost

In this example, the client host name is myhost and the client's IP address is 192.9.200.1. If another host with the name myhost were mounting a file system, there would be two symbolic links to the host name.

NFS Commands

These commands must be run as root to be fully effective, but requests for information can be made by all users:

automount

This command installs autofs mount points and associates the information in the auto_master files with each mount point. The syntax of the command is:

automount [ -t duration ] [ -v ]

where -t duration sets the time, in seconds, that a file system is to remain mounted, and -v selects the verbose mode. Running this command in the verbose mode allows for easier troubleshooting.

If not specifically set, the value for duration is set to 5 minutes. In most circumstances this is a good value; however, on systems that have many automounted file systems, you might need to increase the duration value. In particular, if a server has many users active, checking the automounted file systems every 5 minutes can be inefficient. Checking the autofs file systems every 1800 seconds (or 30 minutes) could be more efficient. By not unmounting the file systems every 5 minutes, /etc/mnttab, which is checked by df, can become large. The output from df can be filtered by using the -F option (see the df(1M) man page) or by using egrep to help fix this problem.
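
For example, to install the autofs mount points with a 30-minute duration in verbose mode:


# automount -t 1800 -v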

Another factor to consider is that adjusting the duration also changes how quickly changes to the automounter maps will be reflected. Changes will not be seen until the file system is unmounted. Refer to "Modifying the Maps" for instructions on how to modify automounter maps.

clear_locks

This command enables you to remove all file, record, and share locks for an NFS client. You must be root to run this command. From an NFS server you can clear the locks for a specific client and from an NFS client you can clear locks for that client on a specific server. The following example would clear the locks for the NFS client named tulip on the current system.


# clear_locks tulip

Using the -s option enables you to specify which NFS host to clear the locks from. It must be run from the NFS client that created the locks. In this case, the client's locks would be removed from the NFS server named bee.


# clear_locks -s bee

Caution -

This command should only be run when a client crashes and cannot clear its locks. To avoid data corruption problems, do not clear locks for an active client.


mount

With this command, you can attach a named file system, either local or remote, to a specified mount point. For more information, see the mount(1M) man page. Used without arguments, mount displays a list of file systems that are currently mounted on your computer.

Many types of file systems are included in the standard Solaris installation. Each file-system type has a specific man page that lists the options to mount that are appropriate for that file-system type. The man page for NFS file systems is mount_nfs(1M); for UFS file systems it is mount_ufs(1M); and so forth.

The Solaris 7 release includes the ability to select a path name to mount from an NFS server using an NFS URL instead of the standard server:/pathname syntax. See "How to Mount an NFS File System Using an NFS URL" for further information.


Caution -

The version of the mount command included in any Solaris release from 2.6 to the current release will not warn about options that are not valid. The command silently ignores any options that cannot be interpreted. Make sure to verify all of the options that were used to prevent unexpected behavior.


mount Options for NFS File Systems

The subsequent text lists some of the options that can follow the -o flag when mounting an NFS file system.

bg|fg

These options can be used to select the retry behavior if a mount fails. The -bg option causes the mount attempts to be run in the background. The -fg option causes the mount attempt to be run in the foreground. The default is -fg, which is the best selection for file systems that must be available. It prevents further processing until the mount is complete. -bg is a good selection for file systems that are not critical, because the client can do other processing while waiting for the mount request to complete.

forcedirectio

This option improves performance of sequential reads on large files. Data is copied directly to a user buffer and no caching is done in the kernel on the client. This option is off by default.

largefiles

This option makes it possible to access files larger than 2 Gbytes on a server running the Solaris 2.6 release. Whether a large file can be accessed can only be controlled on the server, so this option is silently ignored on NFS version 3 mounts. Starting with release 2.6, by default, all UFS file systems are mounted with -largefiles. For mounts using the NFS version 2 protocol, the -largefiles option causes the mount to fail with an error.

nolargefiles

This option for UFS mounts guarantees that there are and will be no large files on the file system (see the mount_ufs(1M) man page). Because the existence of large files can only be controlled on the NFS server, there is no option for -nolargefiles using NFS mounts. Attempts to NFS mount a file system using this option are rejected with an error.
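
For example, a UFS mount that guarantees no large files might look like the following (the device name is hypothetical):


# mount -F ufs -o nolargefiles /dev/dsk/c0t3d0s7 /export/home1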

public

This option forces the use of the public file handle when contacting the NFS server. If the public file handle is supported by the server, the mounting operation is faster because the MOUNT protocol is not used. Also, because the MOUNT protocol is not used, the public option allows mounting to occur through a firewall.

rw|ro

The -rw and -ro options indicate whether a file system is to be mounted read-write or read-only. The default is read-write, which is the appropriate option for remote home directories, mail-spooling directories, or other file systems that need to be changed by users. The read-only option is appropriate for directories that should not be changed by users; for example, shared copies of the man pages should not be writable by users.

sec=mode

You can use this option to specify the authentication mechanism to be used during the mount transaction. The value for mode can be one of the values shown in Table 31-2. The modes are also defined in /etc/nfssec.conf.

Table 31-2 NFS Security Modes

Mode 

Authentication Service Selected 

krb5

Kerberos Version 5 

none

No authentication 

dh

Diffie-Hellman (DH) authentication 

sys

Standard UNIX authentication 
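
For example, to mount a file system using Diffie-Hellman authentication (assuming the server shares /export/share/man with the same mode):


# mount -F nfs -o sec=dh bee:/export/share/man /usr/man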

soft|hard

An NFS file system mounted with the soft option returns an error if the server does not respond. The hard option causes the mount to continue to retry until the server responds. The default is hard, which should be used for most file systems. Applications frequently do not check return values from soft-mounted file systems, which can make the application fail or can lead to corrupted files. Even if the application does check, routing problems and other conditions can still confuse the application or lead to file corruption if the soft option is used. In most cases the soft option should not be used. If a file system is mounted using the hard option and becomes unavailable, an application using this file system will hang until the file system becomes available.

Using the mount Command

Both of these commands mount an NFS file system from the server bee read-only:


# mount -F nfs -r bee:/export/share/man /usr/man

# mount -F nfs -o ro bee:/export/share/man /usr/man

This command uses the -O option to force the man pages from the server bee to be mounted on the local system even if /usr/man has already been mounted:


# mount -F nfs -O bee:/export/share/man /usr/man

This command uses client failover:


# mount -F nfs -r bee,wasp:/export/share/man /usr/man

Note -

When used from the command line, the listed servers must support the same version of the NFS protocol. Do not mix version 2 and version 3 servers when running mount from the command line. You can use mixed servers with autofs, in which case the best subset of version 2 or version 3 servers is used.


Here is an example of using an NFS URL with the mount command:


# mount -F nfs nfs://bee//export/share/man /usr/man

Use the mount command with no arguments to display file systems mounted on a client.


% mount
/ on /dev/dsk/c0t3d0s0 read/write/setuid on Tues Jan 24 13:20:47 1995
/usr on /dev/dsk/c0t3d0s6 read/write/setuid on Tues Jan 24 13:20:47 1995
/proc on /proc read/write/setuid on Tues Jan 24 13:20:47 1995
/dev/fd on fd read/write/setuid on Tues Jan 24 13:20:47 1995
/tmp on swap read/write on Tues Jan 24 13:20:51 1995
/opt on /dev/dsk/c0t3d0s5 setuid/read/write on Tues Jan 24 13:20:51 1995
/home/kathys on bee:/export/home/bee7/kathys              
  intr/noquota/nosuid/remote on Tues Jan 24 13:22:13 1995

umount

This command enables you to remove a remote file system that is currently mounted. The umount command supports the -V option to allow for testing. You might also use the -a option to unmount several file systems at one time. If mount_points are included with the -a option, those file systems are unmounted. If no mount points are included, an attempt is made to unmount all file systems listed in /etc/mnttab, except for the "required" file systems, such as /, /usr, /var, /proc, /dev/fd, and /tmp.

Because the file system is already mounted and should have an entry in /etc/mnttab, you do not need to include a flag for the file-system type.

The command cannot succeed if the file system is in use. For instance, if a user has used cd to get access to a file system, the file system is busy until the working directory is changed. The umount command can hang temporarily if the NFS server is unreachable.

Using the umount Command

This example unmounts a file system mounted on /usr/man:


# umount /usr/man

This example displays the results of running umount -a -V:


# umount -a -V
umount /home/kathys
umount /opt
umount /home
umount /net

Notice that this command does not actually unmount the file systems.

mountall

Use this command to mount all file systems or a specific group of file systems listed in a file-system table. The command provides a way to select the file-system type to be accessed with the -F FSType option, to select all the remote file systems listed in a file-system table with the -r option, and to select all the local file systems with the -l option. Because all file systems labeled as NFS file-system type are remote file systems, some of these options are redundant. For more information, see the mountall(1M) man page.

Using the mountall Command

These two examples are equivalent:


# mountall -F nfs

# mountall -F nfs -r

umountall

Use this command to unmount a group of file systems. The -k option runs the fuser -k mount_point command to kill any processes associated with the mount_point. The -s option indicates that unmount is not to be performed in parallel. -l specifies that only local file systems are to be used, and -r specifies that only remote file systems are to be used. The -h host option indicates that all file systems from the named host should be unmounted. You cannot combine the -h option with -l or -r.

Using the umountall Command

This command unmounts all file systems that are mounted from remote hosts:


# umountall -r

This command unmounts all file systems currently mounted from the server bee:


# umountall -h bee

share

With this command, you can make a local file system on an NFS server available for mounting. You can also use the share command to display a list of the file systems on your system that are currently shared. The NFS server must be running for the share command to work. The NFS server software is started automatically during boot if there is an entry in /etc/dfs/dfstab. The command does not report an error if the NFS server software is not running, so you must check this yourself.
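
For example, one simple check is to look for the daemon in the process table (sample invocation):


# ps -ef | grep nfsd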

The objects that can be shared include any directory tree, but each file system hierarchy is limited by the disk slice or partition that the file system is located on. For instance, sharing the root (/) file system would not also share /usr, unless they are on the same disk partition or slice. Normal installation places root on slice 0 and /usr on slice 6. Also, sharing /usr would not share any other local disk partitions that are mounted on subdirectories of /usr.

A file system cannot be shared if it is part of a larger file system that is already being shared. For example, if /usr and /usr/local are on one disk slice, /usr can be shared or /usr/local can be shared, but if both need to be shared with different share options, /usr/local must be moved to a separate disk slice.


Note -

You can gain access to a file system that is shared read-only through the file handle of a file system that is shared read-write if the two file systems are on the same disk slice. It is more secure to place the file systems that need to be read-write on a separate partition or disk slice from the file systems that you need to share read-only.


Non-file System Specific share Options

Some of the options that you can include with the -o flag are as follows.

rw|ro

The file system is shared read-write or read-only to all clients.

rw=accesslist

The file system is shared read-write to the listed clients only. All other requests are denied. Starting with the Solaris 2.6 release, the list of clients defined in accesslist has been expanded. See "Setting Access Lists With the share Command" for more information. You can use this option to override an -ro option.

NFS Specific share Options

The options that you can use with NFS file systems include the following.

aclok

This option enables an NFS server supporting the NFS version 2 protocol to be configured to do access control for NFS version 2 clients. Without this option, all clients are given minimal access. With this option, the clients have maximal access. For instance, on file systems shared with the -aclok option, if anyone has read permissions, everyone does. However, without this option, you can deny access to a client who should have access permissions. Whether it is preferable to permit too much or too little access depends on the security systems already in place. See "Securing Files (Tasks)" in System Administration Guide, Volume 2 for more information about access control lists (ACLs).


Note -

To take advantage of ACLs, it is best to have clients and servers run software that supports the NFS version 3 and NFS_ACL protocols. If the software only supports the NFS version 3 protocol, clients get correct access, but cannot manipulate the ACLs. If the software supports the NFS_ACL protocol, the clients get correct access and can manipulate the ACLs. Starting with release 2.5, the Solaris system supports both protocols.


anon=uid

You use uid to select the user ID of unauthenticated users. If you set uid to -1, the server denies access to unauthenticated users. You can grant root access by setting anon=0, but this will allow unauthenticated users to have root access, so use the root option instead.
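
For example, to share a file system read-only while denying access to all unauthenticated users:


# share -F nfs -o ro,anon=-1 /export/share/man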

index=filename

You can use the -index=filename option to force the loading of a HyperText Markup Language (HTML) file instead of displaying a listing of the directory when a user accesses an NFS URL. This option mimics the action of current browsers if an index.html file is found in the directory that the HTTP URL is accessing. This is the equivalent of setting the DirectoryIndex option for httpd. For instance, if the dfstab file entry looks like:


share -F nfs -o ro,public,index=index.html /export/web

these URLs will display the same information:


nfs://<server>/<dir>
nfs://<server>/<dir>/index.html
nfs://<server>//export/web/<dir>
nfs://<server>//export/web/<dir>/index.html
http://<server>/<dir>
http://<server>/<dir>/index.html

log=tag

This option specifies the tag in /etc/nfs/nfslog.conf that contains the NFS server logging configuration information for a file system. This option must be selected to enable NFS server logging.
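
For example, to enable logging on a shared file system using the publicftp tag shown earlier in /etc/nfs/nfslog.conf:


# share -F nfs -o ro,log=publicftp /export/ftp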

nosuid

This option signals that all attempts to enable the setuid or setgid mode should be ignored. NFS clients cannot create files with the setuid or setgid bits on.

public

The -public option has been added to the share command to enable WebNFS browsing. Only one file system on a server can be shared with this option.
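
For example, to share a file system for WebNFS access:


# share -F nfs -o ro,public /export/ftp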

root=accesslist

The server gives root access to the hosts in the list. By default, the server does not give root access to any remote hosts. If the selected security mode is anything other than -sec=sys, you can only include client host names in the accesslist. Starting with the Solaris 2.6 release, the list of clients defined in accesslist is expanded. See "Setting Access Lists With the share Command" for more information.


Caution -

Granting root access to other hosts has far-reaching security implications; use the -root= option with extreme caution.


sec=mode[:mode]

mode selects the security modes that are needed to get access to the file system. By default, the security mode is UNIX authentication. You can specify multiple modes, but use each security mode only once per command line. Each -mode option applies to any subsequent -rw, -ro, -rw=, -ro=, -root=, and -window= options, until another -mode is encountered. Using -sec=none maps all users to user nobody.

window=value

value selects the maximum lifetime in seconds of a credential on the NFS server. The default value is 30000 seconds or 8.3 hours.

Setting Access Lists With the share Command

In Solaris releases prior to 2.6, the accesslist included with either the -ro=, -rw=, or -root= option of the share command was restricted to a list of host names or netgroup names. Starting with the Solaris 2.6 release, the access list can also include a domain name, a subnet number, or an entry to deny access. These extensions should make it easier to control access on a single server, without having to change the name space or maintain long lists of clients.

This command provides read-only access for most systems but allows read-write access for rose and lilac:


# share -F nfs -o ro,rw=rose:lilac /usr/src

In the next example, read-only access is assigned to any host in the eng netgroup. The client rose is specifically given read-write access.


# share -F nfs -o ro=eng,rw=rose /usr/src

Note -

You cannot specify both rw and ro without arguments. If no read-write option is specified, the default is read-write for all clients.


To share one file system with multiple clients, you must enter all options on the same line, because multiple invocations of the share command on the same object "remember" only the last command run. This command enables read-write access to three client systems, but only rose and tulip are given access to the file system as root.


# share -F nfs -o rw=rose:lilac:tulip,root=rose:tulip /usr/src

When sharing a file system using multiple authentication mechanisms, make sure to include the -ro, -ro=, -rw, -rw=, -root, and -window options after the correct security modes. In this example, UNIX authentication is selected for all hosts in the netgroup named eng. These hosts can only mount the file system in read-only mode. The hosts tulip and lilac can mount the file system read-write if they use Diffie-Hellman authentication. With these options, tulip and lilac can mount the file system read-only even if they are not using DH authentication, if the host names are listed in the eng netgroup.


# share -F nfs -o sec=dh,rw=tulip:lilac,sec=sys,ro=eng /usr/src

Even though UNIX authentication is the default security mode, it is not included if the -sec option is used, so it is important to include a -sec=sys option if UNIX authentication is to be used with any other authentication mechanism.

You can use a DNS domain name in the access list by preceding the actual domain name with a dot. The dot indicates that the string following it is a domain name, not a fully qualified host name. The following entry allows mount access to all hosts in the eng.sun.com domain:


# share -F nfs -o ro=.:.eng.sun.com /export/share/man

In this example, the single "." matches all hosts that are matched through the NIS or NIS+ name spaces. The results returned from these name services do not include the domain name. The ".eng.sun.com" entry matches all hosts that use DNS for name space resolution. DNS always returns a fully qualified host name, so the longer entry is required if you use a combination of DNS and the other name spaces.

You can use a subnet number in an access list by preceding the actual network number or the network name with "@". This differentiates the network name from a netgroup or a fully qualified host name. You must identify the subnet in either /etc/networks or in a NIS or NIS+ name space. The following entries have the same effect if the 129.144 subnet has been identified as the eng network:


# share -F nfs -o ro=@eng /export/share/man
# share -F nfs -o ro=@129.144 /export/share/man
# share -F nfs -o ro=@129.144.0.0 /export/share/man

The last two entries show that you do not need to include the full network address.

If the network prefix is not byte aligned, as with Classless Inter-Domain Routing (CIDR), the mask length can be explicitly specified on the command line. The mask length is defined by following either the network name or the network number with a slash and the number of significant bits in the prefix of the address. For example:


# share -F nfs -o ro=@eng/17 /export/share/man
# share -F nfs -o ro=@129.144.132/17 /export/share/man

In these examples, the "/17" indicates that the first 17 bits in the address are to be used as the mask. For additional information on CIDR, look up RFC 1519.

You can also select negative access by placing a "-" before the entry. Because the entries are read from left to right, you must place the negative access entries before the entry they apply to:


# share -F nfs -o ro=-rose:.eng.sun.com /export/share/man

This example would allow access to any hosts in the eng.sun.com domain except the host named rose.

unshare

This command allows you to make a previously available file system unavailable for mounting by clients. You can use the unshare command to unshare any file system--whether the file system was shared explicitly with the share command or automatically through /etc/dfs/dfstab. If you use the unshare command to unshare a file system that you shared through the dfstab file, remember that it will be shared again when you exit and reenter run level 3. You must remove the entry for this file system from the dfstab file if the change is to continue.

When you unshare an NFS file system, access from clients with existing mounts is inhibited. The file system might still be mounted on the client, but the files will not be accessible.

Using the unshare Command

This command unshares a specific file system:


# unshare /usr/src

shareall

This command allows for multiple file systems to be shared. When used with no options, the command shares all entries in /etc/dfs/dfstab. You can include a file name to specify the name of a file that lists share command lines. If you do not include a file name, /etc/dfs/dfstab is checked. If you use a "-" to replace the file name, you can type share commands from standard input.

Using the shareall Command

This command shares all file systems listed in a local file:


# shareall /etc/dfs/special_dfstab

unshareall

This command makes all currently shared resources unavailable. The -F FSType option selects a list of file-system types defined in /etc/dfs/fstypes. This flag enables you to choose only certain types of file systems to be unshared. The default file system type is defined in /etc/dfs/fstypes. To choose specific file systems, use the unshare command.

Using the unshareall Command

This example unshares all NFS-type file systems:


# unshareall -F nfs

showmount

This command displays all clients that have remotely mounted file systems that are shared from an NFS server, or only the file systems that are mounted by clients, or the shared file systems with the client access information. The command syntax is:

showmount [ -ade ] [ hostname ]

where -a prints a list of all the remote mounts (each entry includes the client name and the directory), -d prints a list of the directories that are remotely mounted by clients, -e prints a list of the files shared (or exported), and hostname selects the NFS server to gather the information from. If hostname is not specified the local host is queried.

Using the showmount Command

This command lists all clients and the local directories that they have mounted.


# showmount -a bee
lilac:/export/share/man
lilac:/usr/src
rose:/usr/src
tulip:/export/share/man

This command lists the directories that have been mounted.


# showmount -d bee
/export/share/man
/usr/src

This command lists file systems that have been shared.


# showmount -e bee
/usr/src               (everyone)
/export/share/man      eng

setmnt

This command creates an /etc/mnttab table. The mount and umount commands consult the table. Generally, there is no reason to run this command manually; it runs automatically when a system is booted.

Other Useful Commands

These commands can be useful when troubleshooting NFS problems.

nfsstat

You can use this command to gather statistical information about NFS and RPC connections. The syntax of the command is:

nfsstat [ -cmnrsz ]

where -c displays client-side information, -m displays statistics for each NFS mounted file system, -n specifies that NFS information is to be displayed (both client and server side), -r displays RPC statistics, -s displays the server-side information, and -z specifies that the statistics should be set to zero. If no options are supplied on the command line, the -cnrs options are used.

Gathering server-side statistics can be important for debugging problems when new software or hardware is added to the computing environment. Running this command at least once a week, and storing the numbers, provides a good history of previous performance.

Using the nfsstat Command


# nfsstat -s

Server rpc:
Connection oriented:
calls      badcalls   nullrecv   badlen      xdrcall    dupchecks  dupreqs
11420263   0          0          0           0          1428274    19
Connectionless:
calls      badcalls   nullrecv   badlen      xdrcall    dupchecks  dupreqs
14569706   0          0          0           0          953332     1601

Server nfs:
calls      badcalls
24234967   226
Version 2: (13073528 calls)
null       getattr    setattr    root        lookup     readlink   read
138612 1%  1192059 9% 45676 0%   0 0%        9300029 71% 9872 0%   1319897 10%
wrcache    write      create     remove      rename     link       symlink
0 0%       805444 6%  43417 0%   44951 0%    3831 0%    4758 0%    1490 0%
mkdir      rmdir      readdir    statfs
2235 0%    1518 0%    51897 0%   107842 0%
Version 3: (11114810 calls)
null       getattr    setattr    lookup      access     readlink   read
141059 1%  3911728 35% 181185 1% 3395029 30% 1097018 9% 4777 0%   960503 8%
write      create     mkdir      symlink     mknod      remove     rmdir
763996 6%  159257 1%  3997 0%    10532 0%    26 0%      164698 1%  2251 0%
rename     link       readdir    readdirplus fsstat     fsinfo     pathconf
53303 0%   9500 0%    62022 0%   79512 0%    3442 0%    34275 0%   3023 0%
commit    
73677 0%  

Server nfs_acl:
Version 2: (1579 calls)
null       getacl     setacl     getattr     access    
0 0%       3 0%       0 0%       1000 63%    576 36%   
Version 3: (45318 calls)
null       getacl     setacl    
0 0%       45318 100% 0 0%

This is an example of NFS server statistics. The first five lines deal with RPC and the remaining lines report NFS activities. In both sets of statistics, knowing the average number of calls and badcalls and the number of calls per week can help identify when something is going wrong. The badcalls value reports the number of bad messages from a client and can point out network hardware problems.

Some of the connections generate write activity on the disks. A sudden increase in these statistics could indicate trouble and should be investigated. For NFS version 2 statistics, the connections to note are: setattr, write, create, remove, rename, link, symlink, mkdir, and rmdir. For NFS version 3 statistics, the value to watch is commit. If the commit level is high on one NFS server as compared to another almost identical one, check that the NFS clients have enough memory. The number of commit operations on the server grows when clients do not have available resources.

pstack

This command displays a stack trace for each process. It must be run by root. You can use it to determine where a process is hung. The only option allowed with this command is the PID of the process that you want to check (see the proc(1) man page).

The following example checks the nfsd process that is running.


# /usr/proc/bin/pstack 243
243:    /usr/lib/nfs/nfsd -a 16
 ef675c04 poll     (24d50, 2, ffffffff)
 000115dc ???????? (24000, 132c4, 276d8, 1329c, 276d8, 0)
 00011390 main     (3, efffff14, 0, 0, ffffffff, 400) + 3c8
 00010fb0 _start   (0, 0, 0, 0, 0, 0) + 5c

It shows that the process is waiting for a new connection request. This is a normal response. If the stack shows that the process is still in poll after a request is made, the process might be hung. Follow the instructions in "How to Restart NFS Services" to fix this problem. Review the instructions in "NFS Troubleshooting Procedures" to fully verify that your problem is a hung program.

rpcinfo

This command generates information about the RPC service running on a system. You can also use it to change the RPC service. Many options are available with this command (see the rpcinfo(1M) man page). This is a shortened synopsis for some of the options that you can use with the command:

rpcinfo [ -m | -s ] [ hostname ]

rpcinfo [ -t | -u ] [ hostname ] [ progname ]

where -m displays a table of statistics of the rpcbind operations, -s displays a concise list of all registered RPC programs, -t displays the RPC programs that use TCP, -u displays the RPC programs that use UDP, hostname selects the host name of the server you need information from, and progname selects the RPC program to gather information about. If no value is given for hostname, the local host name is used. You can substitute the RPC program number for progname, but many users will remember the name and not the number. You can use the -p option in place of the -s option on those systems that do not run the NFS version 3 software.

The data generated by this command can include the RPC program number, the version numbers that are supported, the transport protocols that are used, the name of the RPC service, and the owner of the RPC service.

Using the rpcinfo Command

This example gathers information on the RPC services running on a server. The text generated by the command is filtered by the sort command to make it more readable. Several lines listing RPC services have been deleted from the example.


% rpcinfo -s bee |sort -n
   program version(s) netid(s)                         service     owner
    100000  2,3,4     udp,tcp,ticlts,ticotsord,ticots  portmapper  superuser
    100001  4,3,2     ticlts,udp                       rstatd      superuser
    100002  3,2       ticots,ticotsord,tcp,ticlts,udp  rusersd     superuser
    100003  3,2       tcp,udp                          nfs         superuser
    100005  3,2,1     ticots,ticotsord,tcp,ticlts,udp  mountd      superuser
    100008  1         ticlts,udp                       walld       superuser
    100011  1         ticlts,udp                       rquotad     superuser
    100012  1         ticlts,udp                       sprayd      superuser
    100021  4,3,2,1   ticots,ticotsord,ticlts,tcp,udp  nlockmgr    superuser
    100024  1         ticots,ticotsord,ticlts,tcp,udp  status      superuser
    100026  1         ticots,ticotsord,ticlts,tcp,udp  bootparam   superuser
    100029  2,1       ticots,ticotsord,ticlts          keyserv     superuser
    100068  4,3,2     tcp,udp                          cmsd        superuser
    100078  4         ticots,ticotsord,ticlts          kerbd       superuser
    100083  1         tcp,udp                          -           superuser
    100087  11        udp                              adm_agent   superuser
    100088  1         udp,tcp                          -           superuser
    100089  1         tcp                              -           superuser
    100099  1         ticots,ticotsord,ticlts          pld         superuser
    100101  10        tcp,udp                          event       superuser
    100104  10        udp                              sync        superuser
    100105  10        udp                              diskinfo    superuser
    100107  10        udp                              hostperf    superuser
    100109  10        udp                              activity    superuser
	.
	.
    100227  3,2       tcp,udp                          -           superuser
    100301  1         ticlts                           niscachemgr superuser
    390100  3         udp                              -           superuser
1342177279  1,2       tcp                              -           14072

This example shows how to gather information about a particular RPC service using a particular transport on a server.


% rpcinfo -t bee mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
% rpcinfo -u bee nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting

The first example checks the mountd service running over TCP. The second example checks the NFS service running over UDP.

snoop

This command is often used to watch for packets on the network. It must be run as root. It is a good way to make sure that the network hardware is functioning on both the client and the server. Many options are available (see the snoop(1M) man page). A shortened synopsis of the command follows:

snoop [ -d device ] [ -o filename ] [ host hostname ]

where -d device specifies the local network interface, -o filename stores all the captured packets into the named file, and hostname indicates to display only packets going to and from a specific host.

The -d device option is useful on those servers that have multiple network interfaces. You can use many other expressions besides setting the host. A combination of command expressions with grep can often generate data that is specific enough to be useful.

When troubleshooting, make sure that packets are going to and from the proper host. Also, look for error messages. Saving the packets to a file can make it much easier to review the data.
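
For example, to capture all packets between the local system and the server bee into a file and review them later (the interface name le0 is hypothetical; use the interface on your system):


# snoop -d le0 -o /tmp/nfs.pkts host bee
# snoop -i /tmp/nfs.pkts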

truss

You can use this command to see if a process is hung. It must be run by root. You can use many options with this command (see the truss(1) man page). A shortened syntax of the command is:

truss [ -t syscall ] -p pid

where -t syscall selects system calls to trace, and -p pid indicates the PID of the process to be traced. The syscall can be a comma-separated list of system calls to be traced. Also, prefixing syscall with a ! excludes the listed system calls from the trace.

This example shows that the process is waiting for another connection request from a new client.


# /usr/bin/truss -p 243
poll(0x00024D50, 2, -1)         (sleeping...)

This is a normal response. If the response does not change after a new connection request has been made, the process could be hung. Follow the instructions in "How to Restart NFS Services" to fix the hung program. Review the instructions in "NFS Troubleshooting Procedures" to fully verify that your problem is a hung program.

How It All Works Together

The following sections describe some of the complex functions of the NFS software.

Version 2 and Version 3 Negotiation

Because NFS servers might be supporting clients that are not using the NFS version 3 software, part of the initiation procedure includes negotiation of the protocol level. If both the client and the server can support version 3, that version will be used. If either the client or the server can only support version 2, that version will be selected.

You can override the values determined by the negotiation by using the -vers option to the mount command (see the mount_nfs(1M) man page). Under most circumstances, you should not have to specify the version level, as the best one is selected by default.

UDP and TCP Negotiation

During initiation, the transport protocol is also negotiated. By default, the first connection-oriented transport supported on both the client and the server is selected. If this does not succeed, the first available connectionless transport protocol is used. The transport protocols supported on a system are listed in /etc/netconfig. TCP is the connection-oriented transport protocol supported by the release. UDP is the connectionless transport protocol.

When both the NFS protocol version and the transport protocol are determined by negotiation, the NFS protocol version is given precedence over the transport protocol. The NFS version 3 protocol using UDP is given higher precedence than the NFS version 2 protocol using TCP. You can manually select both the NFS protocol version and the transport protocol with the mount command (see the mount_nfs(1M) man page). Under most conditions, allow the negotiation to select the best options.
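
For example, to manually force the NFS version 2 protocol over UDP (assuming the server supports that combination):


# mount -F nfs -o vers=2,proto=udp bee:/export/share/man /usr/man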

File Transfer Size Negotiation

The file transfer size establishes the size of the buffers that are used when transferring data between the client and the server. In general, larger transfer sizes are better. The NFS version 3 protocol has an unlimited transfer size, but starting with the Solaris 2.6 release, the software bids a default buffer size of 32 Kbytes. The client can bid a smaller transfer size at mount time if needed, but under most conditions this is not necessary.

The transfer size is not negotiated with systems using the NFS version 2 protocol. Under this condition the maximum transfer size is set to 8 Kbytes.

You can use the -rsize and -wsize options to set the transfer size manually with the mount command. You might need to reduce the transfer size for some PC clients. Also, you can increase the transfer size if the NFS server is configured to use larger transfer sizes.
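
For example, to reduce both transfer sizes to 8 Kbytes for a client that cannot handle the larger default:


# mount -F nfs -o rsize=8192,wsize=8192 bee:/export/share/man /usr/man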

How File Systems Are Mounted

When a client needs to mount a file system from a server, it must obtain a file handle from the server that corresponds to the file system. This process requires that several transactions occur between the client and the server. In this example, the client is attempting to mount /home/terry from the server. A snoop trace for this transaction follows.


client -> server PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
server -> client PORTMAP R GETPORT port=33492
client -> server MOUNT3 C Null
server -> client MOUNT3 R Null 
client -> server MOUNT3 C Mount /export/home9/terry
server -> client MOUNT3 R Mount OK FH=9000 Auth=unix
client -> server PORTMAP C GETPORT prog=100003 (NFS) vers=3 proto=TCP
server -> client PORTMAP R GETPORT port=2049
client -> server NFS C NULL3
server -> client NFS R NULL3 
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK

In this trace, the client first requests the mount port number from the portmap service on the NFS server. After the client receives the mount port number (33492), it uses that number to ping the service on the server. After the client has determined that a service is running on that port number, the client then makes a mount request. When the server responds to this request, it includes the file handle for the file system (9000) that is being mounted. The client then sends a request for the NFS port number. When the client receives the number from the server, it pings the NFS service (nfsd), and requests NFS information about the file system using the file handle.

In the following trace, the client is mounting the file system with the -public option.


client -> server NFS C LOOKUP3 FH=0000 /export/home9/terry
server -> client NFS R LOOKUP3 OK FH=9000
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK

By using the default public file handle (which is 0000), all of the transactions to get information from the portmap service and to determine the NFS port number are skipped.

Effects of the -public Option and NFS URLs When Mounting

Using the -public option can create conditions that cause a mount to fail. Adding an NFS URL can also confuse the situation. The following list describes the specifics of how a file system is mounted when using these options.

Client-Side Failover

Using client-side failover, an NFS client can switch to another server if the server supporting a replicated file system becomes unavailable. The file system can become unavailable if the server it is connected to crashes, if the server is overloaded, or if there is a network fault. The failover, under these conditions, is normally transparent to the user. After it is established, the failover can occur at any time without disrupting the processes running on the client.

Failover requires that the file system be mounted read-only. The file systems must be identical for the failover to occur successfully. See "What Is a Replicated File System?" for a description of what makes a file system identical. A static file system or one that is not changed often is the best candidate for failover.

You cannot use file systems that are mounted using CacheFS with failover. Extra information is stored for each CacheFS file system. This information cannot be updated during failover, so only one of these two features can be used when mounting a file system.

The number of replicas that need to be established for each file system depends on many factors. In general, it is better to have a couple of servers, each supporting multiple subnets, rather than a unique server on each subnet. The process requires checking of each server in the list, so the more servers that are listed, the slower each mount will be.

Failover Terminology

To fully comprehend the process, two terms need to be understood.

What Is a Replicated File System?

For the purposes of failover, a file system can be called a replica when each file is the same size and has the same vnode type as the original file system. Permissions, creation dates, and other file attributes are not considered. If the file size or vnode types are different, the remap fails and the process hangs until the old server becomes available.

You can maintain a replicated file system using rdist, cpio, or another file transfer mechanism. Because updating the replicated file systems causes inconsistency, follow these suggestions for best results:

Failover and NFS Locking

Some software packages require read locks on files. To prevent these products from breaking, read locks on read-only file systems are allowed, but are visible to the client side only. The locks persist through a remap because the server doesn't "know" about them. Because the files should not be changing, you do not need to lock the file on the server side.

Large Files

Starting with the 2.6 release, Solaris supports files that are over 2 Gbytes. By default, UFS file systems are mounted with the -largefiles option to support the new functionality. Previous releases cannot handle files of this size. See "How to Disable Large Files on an NFS Server" for instructions.

No changes need to occur on a Solaris 2.6 NFS client to be able to access a large file, if the file system on the server is mounted with the -largefiles option. However, not all 2.6 commands can handle these large files. See largefile(5) for a list of the commands that can handle the large files. Clients that cannot support the NFS version 3 protocol with the large file extensions cannot access any large files. Although clients running the Solaris 2.5 release can use the NFS version 3 protocol, large file support was not included in that release.

How NFS Server Logging Works

NFS server logging provides records of NFS reads and writes, as well as operations that modify the file system. This data can be used to track access to information. In addition, the records can provide a quantitative way to measure interest in the information.

When a file system with logging enabled is accessed, the kernel writes raw data into a buffer file. This data includes a timestamp, the client IP address, the UID of the requestor, the file handle of the file or directory object that is being accessed, and the type of operation that occurred.

The nfslogd daemon converts this raw data into ASCII records that are stored in log files. During the conversion, the IP addresses are resolved to host names and the UIDs are resolved to logins if the enabled name service can find matches. The file handles are also converted into path names. To accomplish this, the daemon tracks the file handles and stores the information in a separate file-handle-to-path table, so that the path does not have to be re-identified each time a file handle is accessed. Because changes to these mappings are not tracked if nfslogd is turned off, it is important to keep the daemon running.
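
A minimal sketch of enabling logging for a shared file system is an entry in /etc/dfs/dfstab such as the following, which assumes the default global tag in /etc/nfs/nfslog.conf and a hypothetical path:


share -F nfs -o ro,log=global /export/ftp

After the file system is reshared, nfslogd converts the accumulated buffer data into log records as described above.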

How the WebNFS Service Works

The WebNFS service makes files in a directory available to clients using a public file handle. A file handle is an address generated by the kernel that identifies a file for NFS clients. The public file handle has a predefined value, so the server does not need to generate a file handle for the client. The ability to use this predefined file handle reduces network traffic by eliminating the MOUNT protocol and should increase response time for the clients.

By default the public file handle on an NFS server is established on the root file system. This default provides WebNFS access to any clients that already have mount privileges on the server. You can change the public file handle to point to any file system by using the share command.
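
For example, to establish the public file handle on a specific file system instead of the root file system (the path is hypothetical):


# share -F nfs -o ro,public /export/ftp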

When the client has the file handle for the file system, a LOOKUP is run to determine the file handle for the file to be accessed. The NFS protocol allows the evaluation of only one path name component at a time. Each additional level of directory hierarchy requires another LOOKUP. A WebNFS server can evaluate an entire path name with a single transaction, called multicomponent lookup, when the LOOKUP is relative to the public file handle. With multicomponent lookup, the WebNFS server is able to deliver the file handle to the desired file without having to exchange the file handles for each directory level in the path name.

In addition, an NFS client can initiate concurrent downloads over a single TCP connection, which provides quick access without the additional load on the server caused by setting up multiple connections. Although Web browser applications support concurrent downloading of multiple files, each file has its own connection. By using one connection, the WebNFS software reduces the overhead on the server.

If the final component in the path name is a symbolic link to another file system, the client can access the file if the client already has access through normal NFS activities.

Normally, an NFS URL is evaluated relative to the public file handle. The evaluation can be changed to be relative to the server's root file system by adding an additional slash to the beginning of the path. For example, the following two NFS URLs are equivalent if the public file handle has been established on the /export/ftp file system.


nfs://server/junk
nfs://server//export/ftp/junk
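
On clients that support WebNFS, an NFS URL can also be passed directly to the mount command; the mount point here is illustrative:


# mount -F nfs nfs://server/junk /mnt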

How WebNFS Security Negotiation Works

The Solaris 8 release includes a new protocol so a WebNFS client can negotiate a selected security mechanism with a WebNFS server. The new protocol uses security negotiation multicomponent lookup, which is an extension to the multicomponent lookup used in earlier versions of the WebNFS protocol.

The WebNFS client initiates the process by making a regular multicomponent lookup request using the public file handle. Because the client has no knowledge of how the path is protected by the server, the default security mechanism is used. If the default security mechanism is not sufficient, the server replies with an AUTH_TOOWEAK error, indicating that the default mechanism is not valid and the client needs to use a stronger one.

When the client receives the AUTH_TOOWEAK error, it sends a request to the server to determine which security mechanisms are required. If the request succeeds, the server responds with an array of security mechanisms required for the specified path. Depending on the size of the array of security mechanisms, the client might have to make more requests to get the complete array. If the server does not support WebNFS security negotiation, the request fails.

After a successful request, the WebNFS client selects the first security mechanism from the array that it supports and issues a regular multicomponent lookup request using the selected security mechanism to acquire the file handle. All subsequent NFS requests are made using the selected security mechanism and the file handle.
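
From the administrator's point of view, no special negotiation setup is needed; you list the required security mode when sharing the file system, and clients discover it through the protocol. A sketch with one mode and a hypothetical path:


# share -F nfs -o ro,public,sec=dh /export/ftp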

WebNFS Limitations With Web Browser Use

Several functions that a Web site using HTTP can provide are not supported by the WebNFS software. These differences stem from the fact that the NFS server only sends the file, so any special processing must be done on the client. If you need to have one web site configured for both WebNFS and HTTP access, consider the following issues:

Secure NFS System

The NFS environment is a powerful and convenient way to share file systems on a network of different computer architectures and operating systems. However, the same features that make sharing file systems through NFS operation convenient also pose some security problems. Historically, most NFS implementations have used UNIX (or AUTH_SYS) authentication, but stronger authentication methods such as AUTH_DH have also been available. When using UNIX authentication, an NFS server authenticates a file request by authenticating the computer making the request, but not the user, so a client user can run su and impersonate the owner of a file. If DH authentication is used, the NFS server authenticates the user, making this sort of impersonation much harder.

With root access and knowledge of network programming, anyone can introduce arbitrary data into the network and extract any data from the network. The most dangerous attacks are those involving the introduction of data, such as impersonating a user by generating the right packets or recording "conversations" and replaying them later. These attacks affect data integrity. Attacks involving passive eavesdropping--merely listening to network traffic without impersonating anybody--are not as dangerous, as data integrity is not compromised. Users can protect the privacy of sensitive information by encrypting data that goes over the network.

A common approach to network security problems is to leave the solution to each application. A better approach is to implement a standard authentication system at a level that covers all applications.

The Solaris operating environment includes an authentication system at the level of remote procedure call (RPC)--the mechanism on which NFS operation is built. This system, known as Secure RPC, greatly improves the security of network environments and provides additional security to services such as the NFS system. When the NFS system uses the facilities provided by Secure RPC, it is known as a Secure NFS system.

Secure RPC

Secure RPC is fundamental to the Secure NFS system. The goal of Secure RPC is to build a system at least as secure as a time-sharing system (one in which all users share a single computer). A time-sharing system authenticates a user through a login password. With data encryption standard (DES) authentication, the same is true. Users can log in on any remote computer just as they can on a local terminal, and their login passwords are their passports to network security. In a time-sharing environment, the system administrator has an ethical obligation not to change a password to impersonate someone. In Secure RPC, the network administrator is trusted not to alter entries in a database that stores public keys.

You need to be familiar with two terms to understand an RPC authentication system: credentials and verifiers. Using ID badges as an example, the credential is what identifies a person: a name, address, birthday, and so on. The verifier is the photo attached to the badge: you can be sure the badge has not been stolen by checking the photo on the badge against the person carrying it. In RPC, the client process sends both a credential and a verifier to the server with each RPC request. The server sends back only a verifier because the client already "knows" the server's credentials.

RPC's authentication is open ended, which means that a variety of authentication systems can be plugged into it. Currently, there are two systems: UNIX and DH.

When UNIX authentication is used by a network service, the credentials contain the client's host name, UID, GID, and group-access list, but the verifier contains nothing. Because there is no verifier, a superuser could falsify appropriate credentials, using commands such as su. Another problem with UNIX authentication is that it assumes all computers on a network are UNIX computers. UNIX authentication breaks down when applied to other operating systems in a heterogeneous network.

To overcome the problems of UNIX authentication, Secure RPC uses DH authentication.

DH Authentication

DH authentication uses the data encryption standard (DES) and Diffie-Hellman public-key cryptography to authenticate both users and computers in the network. DES is a standard encryption mechanism; Diffie-Hellman public-key cryptography is a cipher system that involves two keys: one public and one secret. The public and secret keys are stored in the name space. NIS stores the keys in the publickey map, and NIS+ stores the keys in the cred table. These maps contain the public key and secret key for all potential users. See the Solaris Naming Administration Guide for more information on how to set up the maps and tables.
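
As a sketch of establishing keys (the user name is hypothetical), the administrator creates a key pair for a user on the NIS master server, and the user then runs keylogin so that the decrypted secret key is available to the keyserv daemon:


# newkey -u jdoe
% keylogin

Users can also create or change their own key pair with the chkey -p command.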

The security of DH authentication is based on a sender's ability to encrypt the current time, which the receiver can then decrypt and check against its own clock. The timestamp is encrypted with DES. For this scheme to work, the two agents must agree on the current time, and the sender and receiver must be using the same encryption key.

If a network runs a time-synchronization program, the time on the client and the server is synchronized automatically. If a time-synchronization program is not available, timestamps can be computed using the server's time instead of the network time. The client asks the server for the time before starting the RPC session, then computes the time difference between its own clock and the server's. This difference is used to offset the client's clock when computing timestamps. If the client and server clocks get out of synchronization to the point where the server begins to reject the client's requests, the DH authentication system on the client resynchronizes with the server.

The client and server arrive at the same encryption key by generating a random conversation key, also known as the session key, and by using public-key cryptography to deduce a common key. The common key is a key that only the client and server are capable of deducing. The conversation key is used to encrypt and decrypt the client's timestamp; the common key is used to encrypt and decrypt the conversation key.

KERB Authentication

Kerberos is an authentication system developed at MIT. Encryption in Kerberos is based on DES. Kerberos support is no longer supplied as part of Secure RPC, but a client-side implementation is included with the Solaris 8 release.

Using Secure RPC With NFS

Be aware of the following points if you plan to use Secure RPC:
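
One such point concerns how Secure NFS is enabled on both ends: the server shares the file system with DH authentication required, and the client mounts it with the same security mode. A sketch with hypothetical names:


# share -F nfs -o sec=dh,rw /export/home
# mount -F nfs -o sec=dh bee:/export/home /mnt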

Autofs Maps

Autofs uses three types of maps: master maps, direct maps, and indirect maps.

Master Autofs Map

The auto_master map associates a directory with a map. It is a master list specifying all the maps that autofs should check. The following example shows what an auto_master file could contain.


Example 31-1 Sample /etc/auto_master File


# Master map for automounter 
# 
+auto_master 
/net            -hosts           -nosuid,nobrowse 
/home           auto_home        -nobrowse 
/xfn            -xfn
/-              auto_direct     -ro  

This example shows the generic auto_master file with one addition for the auto_direct map. Each line in the master map /etc/auto_master has the following syntax:

mount-point map-name [ mount-options ]

mount-point

mount-point is the full (absolute) path name of a directory. If the directory does not exist, autofs creates it if possible. If the directory exists and is not empty, mounting on it hides its contents. In this case, autofs issues a warning.

The notation /- as a mount point indicates that the map in question is a direct map, and no particular mount point is associated with the map as a whole.

map-name

map-name is the map autofs uses to find directions to locations, or mount information. If the name is preceded by a slash (/), autofs interprets the name as a local file. Otherwise, autofs searches for the mount information using the search specified in the name service switch configuration file (/etc/nsswitch.conf). Special maps are also used for /net and /xfn (see "Mount Point /net" and "Mount Point /xfn").

mount-options

mount-options is an optional, comma-separated list of options that apply to the mounting of the entries specified in map-name, unless the entries in map-name list other options. Options for each specific type of file system are listed in the mount man page for that file system (for example, see the mount_nfs(1M) man page for NFS specific mount options). For NFS specific mount points, the bg (background) and fg (foreground) options do not apply.

A line beginning with # is a comment. Everything that follows until the end of the line is ignored.

To split long lines into shorter ones, put a backslash (\) at the end of the line. The maximum number of characters of an entry is 1024.


Note -

If the same mount point is used in two entries, the first entry is used by the automount command. The second entry is ignored.


Mount Point /home

The mount point /home is the directory under which the entries listed in /etc/auto_home (an indirect map) are to be mounted.


Note -

Autofs runs on all computers and supports /net and /home (automounted home directories) by default. These defaults can be overridden by entries in the NIS auto.master map or NIS+ auto_master table, or by local editing of the /etc/auto_master file.


Mount Point /net

Autofs mounts under the directory /net all the entries in the special map -hosts. This is a built-in map that uses only the hosts database. For example, if the computer gumbo is in the hosts database and it exports any of its file systems, the command:


% cd /net/gumbo

changes the current directory to the root directory of the computer gumbo. Autofs can mount only the exported file systems of host gumbo, that is, those on a server available to network users as opposed to those on a local disk. Therefore, some files and directories on gumbo might not be available through /net/gumbo.

With the /net method of access, the server name is in the path and is location dependent. If you want to move an exported file system from one server to another, the path might no longer work. Instead, you should set up an entry in a map specifically for the file system you want rather than use /net.


Note -

Autofs checks the server's export list only at mount time. After a server's file systems are mounted, autofs does not check with the server again until the server's file systems are automatically unmounted. Therefore, newly exported file systems are not "seen" until the file systems on the client are unmounted and then remounted.


Mount Point /xfn

This mount point provides the autofs directory structure for the resources that are shared through the FNS name space (see the Solaris Naming Setup and Configuration Guide for more information about FNS).

Direct Autofs Maps

A direct map is an automount point. With a direct map, there is a direct association between a mount point on the client and a directory on the server. Direct maps have a full path name and indicate the relationship explicitly. This is a typical /etc/auto_direct map:


/usr/local          -ro \
   /bin                   ivy:/export/local/sun4 \
   /share                 ivy:/export/local/share \
   /src                   ivy:/export/local/src
/usr/man            -ro   oak:/usr/man \
                          rose:/usr/man \
                          willow:/usr/man
/usr/games          -ro   peach:/usr/games
/usr/spool/news     -ro   pine:/usr/spool/news \
                          willow:/var/spool/news

Lines in direct maps have the following syntax:

key [ mount-options ] location

key

key is the path name of the mount point in a direct map.

mount-options

mount-options is the options you want to apply to this particular mount. They are required only if they differ from the map default. Options for each specific type of file system are listed in the mount man page for that file system (for example, see the mount_cachefs(1M) man page for CacheFS specific mount options).

location

location is the location of the file system, specified (one or more) as server:pathname for NFS file systems or :devicename for High Sierra file systems (HSFS).


Note -

The pathname should not include an automounted mount point; it should be the actual absolute path to the file system. For instance, the location of a home directory should be listed as server:/export/home/username, not as server:/home/username.


As in the master map, a line beginning with # is a comment. All the text that follows until the end of the line is ignored. Put a backslash at the end of the line to split long lines into shorter ones.

Of all the maps, the entries in a direct map most closely resemble the corresponding entries in /etc/vfstab (vfstab contains a list of all file systems to be mounted). An entry that appears in /etc/vfstab as:


dancer:/usr/local - /usr/local/tmp nfs - yes ro 

appears in a direct map as:


/usr/local/tmp     -ro     dancer:/usr/local 

Note -

No concatenation of options occurs between the automounter maps. Any options added to an automounter map override all options listed in maps that are searched earlier. For instance, options included in the auto_master map would be overridden by corresponding entries in any other map.


See "How Autofs Selects the Nearest Read-Only Files for Clients (Multiple Locations)" for other important features associated with this type of map.

Mount Point /-

In Example 31-1, the mount point /- tells autofs not to associate the entries in auto_direct with any specific mount point. Indirect maps use mount points defined in the auto_master file. Direct maps use mount points specified in the named map. (Remember, in a direct map the key, or mount point, is a full path name.)

An NIS or NIS+ auto_master file can have only one direct map entry because the mount point must be a unique value in the name space. An auto_master file that is a local file can have any number of direct map entries, as long as entries are not duplicated.

Indirect Autofs Maps

An indirect map uses a substitution value of a key to establish the association between a mount point on the client and a directory on the server. Indirect maps are useful for accessing specific file systems, like home directories. The auto_home map is an example of an indirect map.

Lines in indirect maps have the following general syntax:

key [ mount-options ] location

key

key is a simple name (no slashes) in an indirect map.

mount-options

mount-options is the options you want to apply to this particular mount. They are required only if they differ from the map default. Options for each specific type of file system are listed in the mount man page for that file system (for example, see the mount_nfs(1M) man page for NFS specific mount options).

location

location is the location of the file system, specified (one or more) as server:pathname.


Note -

The pathname should not include an automounted mount point; it should be the actual absolute path to the file system. For instance, the location of a directory should be listed as server:/usr/local, not as server:/net/server/usr/local.


As in the master map, a line beginning with # is a comment. All the text that follows until the end of the line is ignored. Put a backslash (\) at the end of the line to split long lines into shorter ones. Example 31-1 shows an auto_master map that contains the entry:


/home      auto_home        -nobrowse    

auto_home is the name of the indirect map that contains the entries to be mounted under /home. A typical auto_home map might contain:


david                  willow:/export/home/david
rob                    cypress:/export/home/rob
gordon                 poplar:/export/home/gordon
rajan                  pine:/export/home/rajan
tammy                  apple:/export/home/tammy
jim                    ivy:/export/home/jim
linda    -rw,nosuid    peach:/export/home/linda

As an example, assume that the previous map is on host oak. If user linda has an entry in the password database specifying her home directory as /home/linda, whenever she logs in to computer oak, autofs mounts the directory /export/home/linda residing on the computer peach. Her home directory is mounted read-write, nosuid.

Assume the following conditions occur: User linda's home directory is listed in the password database as /home/linda. Anybody, including Linda, has access to this path from any computer set up with the master map referring to the map in the previous example.

Under these conditions, user linda can run login or rlogin on any of these computers and have her home directory mounted in place for her.

Furthermore, now Linda can also type the following command:


% cd ~david

autofs mounts David's home directory for her (if all permissions allow).


Note -

No concatenation of options occurs between the automounter maps. Any options added to an automounter map override all options listed in maps that are searched earlier. For instance, options included in the auto_master map are overridden by corresponding entries in any other map.


On a network without a name service, you have to change all the relevant files (such as /etc/passwd) on all systems on the network to allow this kind of access. With NIS, make the changes on the NIS master server and propagate the relevant databases to the slave servers. On a network running NIS+, propagating the relevant databases to the slave servers is done automatically after the changes are made.

How Autofs Works

Autofs is a client-side service that automatically mounts the appropriate file system. When a client attempts to access a file system that is not presently mounted, the autofs file system intercepts the request and calls automountd to mount the requested directory. The automountd daemon locates the directory, mounts it within autofs, and replies. On receiving the reply, autofs allows the waiting request to proceed. Subsequent references to the mount are redirected by the autofs--no further participation is required by automountd until the file system is automatically unmounted by autofs after a period of inactivity.

The components that work together to accomplish automatic mounting are the automount command, the autofs file system, and the automountd daemon.

The automount command, called at system startup time, reads the master map file auto_master to create the initial set of autofs mounts. These autofs mounts are not automatically mounted at startup time. They are points under which file systems are mounted in the future. These points are also known as trigger nodes.

After the autofs mounts are set up, they can trigger file systems to be mounted under them. For example, when autofs receives a request to access a file system that is not currently mounted, autofs calls automountd, which actually mounts the requested file system.

Starting with the Solaris 2.5 release, the automountd daemon is completely independent from the automount command. Because of this separation, it is possible to add, delete, or change map information without first having to stop and start the automountd daemon process.

After initially mounting autofs mounts, the automount command is used to update autofs mounts as necessary, by comparing the list of mounts in the auto_master map with the list of mounted file systems in the mount table file /etc/mnttab (formerly /etc/mtab) and making the appropriate changes. This allows system administrators to change mount information within auto_master and have those changes used by the autofs processes without having to stop and restart the autofs daemon. After the file system is mounted, further access does not require any action from automountd until the file system is automatically unmounted.
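
For example, after editing a map, you can run the automount command by hand so that the changes take effect immediately; the -v option reports each mount and unmount that the command performs:


# automount -v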

Unlike mount, automount does not read the /etc/vfstab file (which is specific to each computer) for a list of file systems to mount. The automount command is controlled within a domain and on computers through the name space or local files.

This is a simplified overview of how autofs works:

The automount daemon automountd starts at boot time from the /etc/init.d/autofs script (see Figure 31-1). This script also runs the automount command, which reads the master map (see "How Autofs Starts the Navigation Process (Master Map)") and installs autofs mount points.

Figure 31-1 /etc/init.d/autofs Script Starts automount


Autofs is a kernel file system that supports automatic mounting and unmounting.

When a request is made to access a file system at an autofs mount point:

  1. Autofs intercepts the request.

  2. Autofs sends a message to the automountd for the requested file system to be mounted.

  3. automountd locates the file system information in a map, creates the trigger nodes, and performs the mount.

  4. Autofs allows the intercepted request to proceed.

  5. Autofs unmounts the file system after a period of inactivity.


Note -

Mounts managed through the autofs service should not be manually mounted or unmounted. Even if the operation is successful, the autofs service does not check that the object has been unmounted, resulting in possible inconsistencies. A reboot clears all of the autofs mount points.


How Autofs Navigates Through the Network (Maps)

Autofs searches a series of maps to navigate its way through the network. Maps are files that contain information such as the password entries of all users on a network or the names of all host computers on a network, that is, network-wide equivalents of UNIX administration files. Maps are available locally or through a network name service like NIS or NIS+. You create maps to meet the needs of your environment using the Solstice System Management Tools. See "Modifying How Autofs Navigates the Network (Modifying Maps)".

How Autofs Starts the Navigation Process (Master Map)

The automount command reads the master map at system startup. Each entry in the master map is a direct or indirect map name, its path, and its mount options, as shown in Figure 31-2. The specific order of the entries is not important. automount compares entries in the master map with entries in the mount table to generate a current list.

Figure 31-2 Navigation Through the Master Map


Autofs Mount Process

What the autofs service does when a mount request is triggered depends on how the automounter maps are configured. The mount process is generally the same for all mounts, but the final result changes with the mount point specified and the complexity of the maps. Starting with the Solaris 2.6 release, the mount process has also been changed to include the creation of the trigger nodes.

Simple Autofs Mount

To help explain the autofs mount process, assume that the following files are installed.


$ cat /etc/auto_master
# Master map for automounter
#
+auto_master
/net        -hosts        -nosuid,nobrowse
/home       auto_home     -nobrowse
/xfn        -xfn
/share      auto_share
$ cat /etc/auto_share
# share directory map for automounter
#
ws          gumbo:/export/share/ws

When the /share directory is accessed, the autofs service creates a trigger node for /share/ws, which can be seen in /etc/mnttab as an entry that resembles the following:


-hosts  /share/ws     autofs  nosuid,nobrowse,ignore,nest,dev=###

When the /share/ws directory is accessed, the autofs service completes the process with these steps:

  1. Pings the server's mount service to see if it's alive.

  2. Mounts the requested file system under /share. Now the /etc/mnttab file contains the following entries:


    -hosts  /share/ws     autofs  nosuid,nobrowse,ignore,nest,dev=###
    gumbo:/export/share/ws /share/ws   nfs   nosuid,dev=####    #####

Hierarchical Mounting

When multiple layers are defined in the automounter files, the mount process becomes more complex. If the /etc/auto_share file from the previous example is expanded to contain:


# share directory map for automounter
#
ws       /       gumbo:/export/share/ws
         /usr    gumbo:/export/share/ws/usr

The mount process is basically the same as the previous example when the /share/ws mount point is accessed. In addition, a trigger node to the next level (/usr) is created in the /share/ws file system so that the next level can be mounted if it is accessed. In this example, /export/share/ws/usr must exist on the NFS server for the trigger node to be created.


Caution -

Do not use the -soft option when specifying hierarchical layers. Refer to "Autofs Unmounting" for an explanation of this limitation.


Autofs Unmounting

The unmounting that occurs after a certain amount of idle time is from the bottom up (reverse order of mounting). If one of the directories at a higher level in the hierarchy is busy, only file systems below that directory are unmounted. During the unmounting process, any trigger nodes are removed and then the file system is unmounted. If the file system is busy, the unmount fails and the trigger nodes are reinstalled.


Caution -

Do not use the -soft option when specifying hierarchical layers. If the -soft option is used, requests to reinstall the trigger nodes can time out. The failure to reinstall the trigger nodes leaves no access to the next level of mounts. The only way to clear this problem is to have the automounter unmount all of the components in the hierarchy, either by waiting for the file systems to be automatically unmounted or by rebooting the system.


How Autofs Selects the Nearest Read-Only Files for Clients (Multiple Locations)

Consider again the direct map example shown earlier:


/usr/local          -ro \
   /bin                   ivy:/export/local/sun4\
   /share                 ivy:/export/local/share\
   /src                   ivy:/export/local/src
/usr/man            -ro   oak:/usr/man \
                          rose:/usr/man \
                          willow:/usr/man
/usr/games          -ro   peach:/usr/games
/usr/spool/news     -ro   pine:/usr/spool/news \
                          willow:/var/spool/news 

The mount points /usr/man and /usr/spool/news list more than one location (three for the first, two for the second). This means any of the replicated locations can provide the same service to any user. This procedure makes sense only when you mount a file system that is read-only, as you must have some control over the locations of files you write or modify. You don't want to modify files on one server on one occasion and, minutes later, modify the "same" file on another server. The benefit is that the best available server is used automatically without any effort required by the user.

If the file systems are configured as replicas (see "What Is a Replicated File System?"), the clients have the advantage of using failover. Not only is the best server automatically determined, but if that server becomes unavailable, the client automatically uses the next-best server. Failover was first implemented in the Solaris 2.6 release.

An example of a good file system to configure as a replica is man pages. In a large network, more than one server can export the current set of manual pages. Which server you mount them from does not matter, as long as the server is running and exporting its file systems. In the previous example, multiple mount locations are expressed as a list of mount locations in the map entry.


/usr/man -ro oak:/usr/man rose:/usr/man willow:/usr/man 

Here you can mount the man pages from the servers oak, rose, or willow. Which server is best depends on a number of factors including: the number of servers supporting a particular NFS protocol level, the proximity of the server, and weighting.

During the sorting process, a count of the number of servers supporting the NFS version 2 and NFS version 3 protocols is made. Whichever protocol is supported on the most servers becomes the protocol supported by default. This provides the client with the maximum number of servers to depend on.

After the largest subset of servers with the same protocol version is found, that server list is sorted by proximity. Servers on the local subnet are given preference over servers on a remote subnet. The closest server is given preference, which reduces latency and network traffic. Figure 31-3 illustrates server proximity.

Figure 31-3 Server Proximity


If several servers supporting the same protocol are on the local subnet, the time to connect to each server is determined and the fastest is used. The sorting can also be influenced by using weighting (see "Autofs and Weighting").

If version 3 servers are more abundant, the sorting process becomes more complex. Normally, servers on the local subnet are given preference over servers on a remote subnet. A version 2 server can complicate matters, as it might be closer than the nearest version 3 server. If there is a version 2 server on the local subnet and the closest version 3 server is on a remote subnet, the version 2 server is given preference. This preference is only checked if there are more version 3 servers than version 2 servers. If there are more version 2 servers, only a version 2 server is selected.

With failover, the sorting is checked once at mount time to select one server from which to mount, and again anytime the mounted server becomes unavailable. Multiple locations are useful in an environment where individual servers might not export their file systems temporarily.

This feature is particularly useful in a large network with many subnets. Autofs chooses the nearest server and therefore confines NFS network traffic to a local network segment. In servers with multiple network interfaces, list the host name associated with each network interface as if it were a separate server. Autofs selects the nearest interface to the client.

Autofs and Weighting

You can influence the selection of servers at the same proximity level by adding a weighting value to the autofs map. For example:


/usr/man -ro oak,rose(1),willow(2):/usr/man

The numbers in parentheses indicate a weighting. Servers without a weighting have a value of zero (most likely to be selected). The higher the weighting value, the lower the chance the server will be selected.


Note -

All other server selection factors are more important than weighting. Weighting is only considered when selecting between servers with the same network proximity.


Variables in a Map Entry

You can create a client-specific variable by prefixing a dollar sign ($) to its name. This helps you to accommodate different architecture types accessing the same file system location. You can also use curly braces to delimit the name of the variable from appended letters or digits. Table 31-4 shows the predefined map variables.

Table 31-4 Predefined Map Variables

Variable 

Meaning 

Derived From 

Example 

ARCH

Architecture type 

uname -m

sun4

CPU

Processor type 

uname -p

sparc

HOST

Host name 

uname -n

dinky

OSNAME

Operating system name 

uname -s

SunOS

OSREL

Operating system release 

uname -r

5.4

OSVERS

Operating system version (version of the release) 

uname -v

FCS1.0

You can use variables anywhere in an entry line except as a key. For instance, if you have a file server exporting binaries for SPARC and IA architectures from /usr/local/bin/sparc and /usr/local/bin/x86 respectively, the clients can mount through a map entry like the following:


/usr/local/bin     -ro     server:/usr/local/bin/$CPU

Now the same map entry works for all clients, regardless of architecture.
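
The curly braces are useful when a variable abuts other characters. This hypothetical entry appends a suffix to the processor type to select a directory:


/usr/local/lib     -ro     server:/export/lib/${CPU}.lib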


Note -

Most applications written for any of the sun4 architectures can run on all sun4 platforms, so the ARCH variable is hard-coded to sun4 instead of sun4m.


Maps That Refer to Other Maps

A map entry +mapname used in a file map causes automount to read the specified map as if it were included in the current file. If mapname is not preceded by a slash, autofs treats the map name as a string of characters and uses the name service switch policy to find it. If the path name is an absolute path name, automount checks a local map of that name. If the map name starts with a dash (-), automount consults the appropriate built-in map, such as xfn or hosts.

The name service switch file contains an entry for autofs, labeled automount, which specifies the order in which the name services are searched. The following file is an example of a name service switch file:


#
# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf;
# it uses NIS (YP) in conjunction with files.
#
# "hosts:" and "services:" in this file are used only if the /etc/netconfig
# file contains "switch.so" as a nametoaddr library for "inet" transports.
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:         files nis
group:          files nis

# consult /etc "files" only if nis is down.
hosts:          nis [NOTFOUND=return] files
networks:       nis [NOTFOUND=return] files
protocols:      nis [NOTFOUND=return] files
rpc:            nis [NOTFOUND=return] files
ethers:         nis [NOTFOUND=return] files
netmasks:       nis [NOTFOUND=return] files
bootparams:     nis [NOTFOUND=return] files
publickey:      nis [NOTFOUND=return] files
netgroup:       nis
automount:      files nis
aliases:        files nis
# for efficient getservbyname() avoid nis
services:       files nis 

In this example, the local maps are searched before the NIS maps, so you can have a few entries in your local /etc/auto_home map for the most commonly accessed home directories, and use the switch to fall back to the NIS map for other entries.


bill               cs.csc.edu:/export/home/bill
bonny              cs.csc.edu:/export/home/bonny

After consulting the included map, if no match is found, automount continues scanning the current map. This means you can add more entries after a + entry.


bill               cs.csc.edu:/export/home/bill
bonny              cs.csc.edu:/export/home/bonny
+auto_home 

The map included can be a local file (remember, only local files can contain + entries) or a built-in map:


+auto_home_finance      # NIS+ map
+auto_home_sales        # NIS+ map
+auto_home_engineering  # NIS+ map
+/etc/auto_mystuff      # local map
+auto_home              # NIS+ map
+-hosts                 # built-in hosts map 

Note -

You cannot use + entries in NIS+ or NIS maps.


Executable Autofs Maps

You can create an autofs map that will execute some commands to generate the autofs mount points. You could benefit from using an executable autofs map if you need to be able to create the autofs structure from a database or a flat file. The disadvantage to using an executable map is that the map will need to be installed on each host. An executable map cannot be included in either the NIS or the NIS+ name service.

The executable map must have an entry in the auto_master file.


/execute    auto_execute

Here is an example of an executable map:


#!/bin/ksh
#
# executable map for autofs
#

case $1 in
    src)  echo '-nosuid,hard bee:/export1' ;;
esac

For this example to work, the file must be installed as /etc/auto_execute and must have the executable bit set (set permissions to 744). Under these circumstances running the following command:


% ls /execute/src

causes the /export1 file system from bee to be mounted.

Modifying How Autofs Navigates the Network (Modifying Maps)

You can modify, delete, or add entries to maps to meet the needs of your environment. As applications and other file systems that users require change their location, the maps must reflect those changes. You can modify autofs maps at any time. Whether your modifications take effect the next time automountd mounts a file system depends on which map you modify and what kind of modification you make.

Default Autofs Behavior With Name Services

At boot time, autofs is invoked by the /etc/init.d/autofs script, which checks for the master auto_master map (subject to the rules discussed subsequently).

Autofs uses the name service specified in the automount entry of the /etc/nsswitch.conf file. If NIS+ is specified, as opposed to local files or NIS, all map names are used as is. If NIS is selected and autofs cannot find a map that it needs, but finds a map name that contains one or more underscores, the underscores are changed to dots, which allows the old NIS file names to work. Then autofs checks the map again, as shown in Figure 31-4.

Figure 31-4 How Autofs Uses the Name Service


The screen activity for this session would look like the following example.


$ grep /home /etc/auto_master
/home           auto_home

$ ypmatch brent auto_home
Can't match key brent in map auto_home.  Reason: no such map in
server's domain.

$ ypmatch brent auto.home
diskus:/export/home/diskus1/&

If "files" is selected as the name service, all maps are assumed to be local files in the /etc directory. Autofs interprets a map name that begins with a slash (/) as local regardless of which name service it uses.

Autofs Reference

The rest of this chapter describes more advanced autofs features and topics.

Metacharacters

Autofs recognizes some characters as having a special meaning. Some are used for substitutions, some to protect other characters from the autofs map parser.

Ampersand (&)

If you have a map with many subdirectories specified, as in the following, consider using string substitutions.


john        willow:/home/john
mary        willow:/home/mary
joe         willow:/home/joe
able        pine:/export/able
baker       peach:/export/baker

You can use the ampersand character (&) to substitute the key wherever it appears. If you use the ampersand, the previous map changes to:


john        willow:/home/&
mary        willow:/home/&
joe         willow:/home/&
able        pine:/export/&
baker       peach:/export/&

You could also use key substitutions in a direct map, in situations like this:


/usr/man        willow,cedar,poplar:/usr/man

which you can also write as:


/usr/man        willow,cedar,poplar:&

Notice that the ampersand substitution uses the whole key string, so if the key in a direct map starts with a / (as it should), the slash is carried over, and you could not do, for example, the following:


/progs          &1,&2,&3:/export/src/progs

because autofs would interpret it as:


/progs          /progs1,/progs2,/progs3:/export/src/progs

Asterisk (*)

You can use the universal substitute character, the asterisk (*), to match any key. You could mount the /export file system from all hosts through this map entry.


*               &:/export

Each ampersand is substituted by the value of any given key. Autofs interprets the asterisk as an end-of-file character.

Special Characters

If you have a map entry that contains special characters, you might have to mount directories whose names confuse the autofs map parser. The parser is sensitive to names containing colons, commas, spaces, and so on. Enclose these names in double quotation marks, as in the following:


/vms    -ro    vmsserver: -  -  - "rc0:dk1 - "
/mac    -ro    gator:/ - "Mr Disk - "