System Administration Guide: Network Services

Chapter 6 Accessing Network File Systems (Reference)

This chapter describes the NFS commands, as well as the different parts of the NFS environment and how these parts work together.


Note –

If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones for more information.


NFS Files

You need several files to support NFS activities on any computer. Many of these files are ASCII, but some of the files are data files. Table 6–1 lists these files and their functions.

Table 6–1 NFS Files

Each entry lists a file name, followed by the function of the file.

/etc/default/autofs

Lists configuration information for the autofs environment. 

/etc/default/fs

Lists the default file-system type for local file systems. 

/etc/default/nfs

Lists configuration information for lockd and nfsd. For more information, refer to Keywords for the /etc/default/nfs File and the nfs(4) man page.

/etc/default/nfslogd

Lists configuration information for the NFS server logging daemon, nfslogd.

/etc/dfs/dfstab

Lists the local resources to be shared. 

/etc/dfs/fstypes

Lists the default file-system types for remote file systems. 

/etc/dfs/sharetab

Lists the local and remote resources that are shared. See the sharetab(4) man page. Do not edit this file.

/etc/mnttab

Lists file systems that are currently mounted, including automounted directories. See the mnttab(4) man page. Do not edit this file.

/etc/netconfig

Lists the transport protocols. Do not edit this file.

/etc/nfs/nfslog.conf

Lists general configuration information for NFS server logging. 

/etc/nfs/nfslogtab

Lists information for log postprocessing by nfslogd. Do not edit this file.

/etc/nfssec.conf

Lists NFS security services. 

/etc/rmtab

Lists file systems that are remotely mounted by NFS clients. See the rmtab(4) man page. Do not edit this file.

/etc/vfstab

Defines file systems to be mounted locally. See the vfstab(4) man page.

The first entry in /etc/dfs/fstypes is often used as the default file-system type for remote file systems. This entry defines the NFS file-system type as the default.

Only one entry is in /etc/default/fs: the default file-system type for local disks. You can determine the file-system types that are supported on a client or server by checking the files in /kernel/fs.
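For example, listing that directory shows the supported file-system types. The output here is abbreviated and illustrative; the exact list varies by release.


% ls /kernel/fs
autofs  hsfs  lofs  nfs  tmpfs  udfs  ufs  zfs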

/etc/default/autofs File

Starting in the Solaris 10 release, you can use the /etc/default/autofs file to configure your autofs environment. Specifically, this file provides an additional way to configure your autofs commands and autofs daemons. The specifications that you would otherwise make on the command line can be made in this configuration file. Unlike command-line specifications, the settings in this file are preserved, even during upgrades to your operating system. Additionally, you are no longer required to update critical startup files to ensure that the existing behavior of your autofs environment is preserved. You can make your specifications by providing values for the following keywords:

AUTOMOUNT_TIMEOUT

Sets the duration for a file system to remain idle before the file system is unmounted. This keyword is the equivalent of the -t argument for the automount command. The default value is 600.

AUTOMOUNT_VERBOSE

Provides notification of autofs mounts, unmounts, and other nonessential events. This keyword is the equivalent of the -v argument for automount. The default value is FALSE.

AUTOMOUNTD_VERBOSE

Logs status messages to the console and is the equivalent of the -v argument for the automountd daemon. The default value is FALSE.

AUTOMOUNTD_NOBROWSE

Turns browsing on or off for all autofs mount points and is the equivalent of the -n argument for automountd. The default value is FALSE.

AUTOMOUNTD_TRACE

Expands each remote procedure call (RPC) and displays the expanded RPC on standard output. This keyword is the equivalent of the -T argument for automountd. The default value is 0. Values can range from 0 to 5.

AUTOMOUNTD_ENV

Permits you to assign different values to different environments. This keyword is the equivalent of the -D argument for automountd. The AUTOMOUNTD_ENV keyword can be used multiple times. However, you must use separate lines for each environment assignment.
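As an illustration, the following hypothetical /etc/default/autofs entries raise the idle timeout to 30 minutes, enable verbose messages from both the command and the daemon, and assign a value to an automount map variable. The variable name and value in the AUTOMOUNTD_ENV entry are illustrative only.


AUTOMOUNT_TIMEOUT=1800
AUTOMOUNT_VERBOSE=TRUE
AUTOMOUNTD_VERBOSE=TRUE
AUTOMOUNTD_ENV=ARCH=sun4u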

For more information, refer to the man pages for automount(1M) and automountd(1M). For procedural information, refer to How to Use the /etc/default/autofs File.

Keywords for the /etc/default/nfs File

In NFS version 4, the following keywords can be set in the /etc/default/nfs file. These keywords control the NFS protocols that are used by both the client and server.

NFS_SERVER_VERSMIN

Sets the minimum version of the NFS protocol to be registered and offered by the server. Starting in the Solaris 10 release, the default is 2. Other valid values include 3 or 4. Refer to Setting Up NFS Services.

NFS_SERVER_VERSMAX

Sets the maximum version of the NFS protocol to be registered and offered by the server. Starting in the Solaris 10 release, the default is 4. Other valid values include 2 or 3. Refer to Setting Up NFS Services.

NFS_CLIENT_VERSMIN

Sets the minimum version of the NFS protocol to be used by the NFS client. Starting in the Solaris 10 release, the default is 2. Other valid values include 3 or 4. Refer to Setting Up NFS Services.

NFS_CLIENT_VERSMAX

Sets the maximum version of the NFS protocol to be used by the NFS client. Starting in the Solaris 10 release, the default is 4. Other valid values include 2 or 3. Refer to Setting Up NFS Services.

NFS_SERVER_DELEGATION

Controls whether the NFS version 4 delegation feature is enabled for the server. If this feature is enabled, the server attempts to provide delegations to the NFS version 4 client. By default, server delegation is enabled. To disable server delegation, see How to Select Different Versions of NFS on a Server. For more information, refer to Delegation in NFS Version 4.

NFSMAPID_DOMAIN

Sets a common domain for clients and servers. Overrides the default behavior of using the local DNS domain name. For task information, refer to Setting Up NFS Services. Also, see nfsmapid Daemon.
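For example, a hypothetical /etc/default/nfs fragment that restricts both the client and the server to NFS version 3 or 4 and assigns a common domain might read as follows. The domain name example.com is illustrative only.


NFS_SERVER_VERSMIN=3
NFS_SERVER_VERSMAX=4
NFS_CLIENT_VERSMIN=3
NFS_CLIENT_VERSMAX=4
NFSMAPID_DOMAIN=example.com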

/etc/default/nfslogd File

This file defines some of the parameters that are used by NFS server logging. The following parameters can be defined.

CYCLE_FREQUENCY

Determines the number of hours that must pass before the log files are cycled. The default value is 24 hours. This option is used to prevent the log files from growing too large.

IDLE_TIME

Sets the number of seconds nfslogd should sleep before checking for more information in the buffer file. This parameter also determines how often the configuration file is checked. This parameter, along with MIN_PROCESSING_SIZE, determines how often the buffer file is processed. The default value is 300 seconds. Increasing this number can improve performance by reducing the number of checks.

MAPPING_UPDATE_INTERVAL

Specifies the number of seconds between updates of the records in the file-handle-to-path mapping tables. The default value is 86400 seconds or one day. This parameter helps keep the file-handle-to-path mapping tables up-to-date without having to continually update the tables.

MAX_LOGS_PRESERVE

Determines the number of log files to be saved. The default value is 10.

MIN_PROCESSING_SIZE

Sets the minimum number of bytes that the buffer file must reach before processing and writing to the log file. This parameter, along with IDLE_TIME, determines how often the buffer file is processed. The default value is 524288 bytes. Increasing this number can improve performance by reducing the number of times the buffer file is processed.

PRUNE_TIMEOUT

Selects the number of hours that must pass before a file-handle-to-path mapping record times out and can be reduced. The default value is 168 hours or 7 days.

UMASK

Specifies the file mode creation mask for the log files that are created by nfslogd. The default value is 0137.
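For example, the following hypothetical /etc/default/nfslogd settings cycle the logs every 12 hours, preserve 20 old log files, and process the buffer file less often. These values are illustrative, not recommendations.


CYCLE_FREQUENCY=12
IDLE_TIME=600
MAX_LOGS_PRESERVE=20
MIN_PROCESSING_SIZE=1048576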

/etc/nfs/nfslog.conf File

This file defines the path, file names, and type of logging to be used by nfslogd. Each definition is associated with a tag. Starting NFS server logging requires that you identify the tag for each file system. The global tag defines the default values. You can use the following parameters with each tag as needed.

defaultdir=path

Specifies the default directory path for the logging files. Unless you specify differently, the default directory is /var/nfs.

log=path/filename

Sets the path and file name for the log files. The default is /var/nfs/nfslog.

fhtable=path/filename

Selects the path and file name for the file-handle-to-path database files. The default is /var/nfs/fhtable.

buffer=path/filename

Determines the path and file name for the buffer files. The default is /var/nfs/nfslog_workbuffer.

logformat=basic|extended

Selects the format to be used when creating user-readable log files. The basic format produces a log file that is similar to some ftpd daemons. The extended format gives a more detailed view.

If the path is not specified, the path that is defined by defaultdir is used. Also, you can override defaultdir by using an absolute path.

To identify the files more easily, place the files in separate directories. Here is an example of the changes that are needed.


% cat /etc/nfs/nfslog.conf
#ident  "@(#)nfslog.conf        1.5     99/02/21 SMI"
#
  .
  .
# NFS server log configuration file.
#

global  defaultdir=/var/nfs \
        log=nfslog fhtable=fhtable buffer=nfslog_workbuffer

publicftp log=logs/nfslog fhtable=fh/fhtables buffer=buffers/workbuffer

In this example, any file system that is shared with log=publicftp uses the following values:

  • The default directory is /var/nfs.

  • Log files are stored in /var/nfs/logs/nfslog.

  • File-handle-to-path database tables are stored in /var/nfs/fh/fhtables.

  • Buffer files are stored in /var/nfs/buffers/workbuffer.

For procedural information, refer to How to Enable NFS Server Logging.

NFS Daemons

To support NFS activities, several daemons are started when a system goes into run level 3 or multiuser mode. The mountd and nfsd daemons are run on systems that are servers. The automatic startup of the server daemons depends on the existence of entries that are labeled with the NFS file-system type in /etc/dfs/sharetab. To support NFS file locking, the lockd and statd daemons are run on NFS clients and servers. However, unlike previous versions of NFS, in NFS version 4, the daemons lockd, statd, mountd, and nfslogd are not used.

This section describes the following daemons:

  • automountd

  • lockd

  • mountd

  • nfs4cbd

  • nfsd

  • nfslogd

  • nfsmapid

  • statd

automountd Daemon

This daemon handles the mounting and unmounting requests from the autofs service. The syntax of the command is as follows:

automountd [ -Tnv ] [ -D name=value ]

The command behaves in the following ways:

  • -T enables tracing.

  • -n disables browsing on all autofs nodes.

  • -v logs all status messages to the console.

  • -D name=value substitutes value for the automount map variable that is indicated by name.

The default value for the automount map is /etc/auto_master. Use the -T option for troubleshooting.
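For example, the following hypothetical invocation enables tracing and substitutes a value for an automount map variable. The variable name and value are illustrative only.


# automountd -T -D ARCH=sun4u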

lockd Daemon

This daemon supports record-locking operations on NFS files. The lockd daemon manages RPC connections between the client and the server for the Network Lock Manager (NLM) protocol. The daemon is normally started without any options. You can use three options with this command. See the lockd(1M) man page. These options can either be used from the command line or by editing the appropriate string in /etc/default/nfs. The following are descriptions of keywords that can be set in the /etc/default/nfs file.


Note –

Starting in the Solaris 10 release, the LOCKD_GRACE_PERIOD keyword and the -g option have been deprecated. The deprecated keyword is replaced with the new keyword GRACE_PERIOD. If both keywords are set, the value for GRACE_PERIOD overrides the value for LOCKD_GRACE_PERIOD. See the description of GRACE_PERIOD that follows.


Like LOCKD_GRACE_PERIOD, GRACE_PERIOD=graceperiod in /etc/default/nfs sets the number of seconds after a server reboot that the clients have to reclaim both NFS version 3 locks, provided by NLM, and version 4 locks. Thus, the value for GRACE_PERIOD controls the length of the grace period for lock recovery, for both NFS version 3 and NFS version 4.

The LOCKD_RETRANSMIT_TIMEOUT=timeout parameter in /etc/default/nfs selects the number of seconds to wait before retransmitting a lock request to the remote server. This option affects the NFS client-side service. The default value for timeout is 15 seconds. Decreasing the timeout value can improve response time for NFS clients on a “noisy” network. However, this change can cause additional server load by increasing the frequency of lock requests. The same parameter can be used from the command line by starting the daemon with the -t timeout option.

The LOCKD_SERVERS=nthreads parameter in /etc/default/nfs specifies the maximum number of concurrent threads that the server handles per connection. Base the value for nthreads on the load that is expected on the NFS server. The default value is 20. Each NFS client that uses TCP uses a single connection with the NFS server. Therefore, each client can use a maximum of 20 concurrent threads on the server.

All NFS clients that use UDP share a single connection with the NFS server. Under these conditions, you might have to increase the number of threads that are available for the UDP connection. A minimum calculation would be to allow two threads for each UDP client. However, this number is specific to the workload on the client, so two threads per client might not be sufficient. The disadvantage to using more threads is that when the threads are used, more memory is used on the NFS server. If the threads are never used, however, increasing nthreads has no effect. The same parameter can be used from the command line by starting the daemon with the nthreads option.
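For example, these hypothetical /etc/default/nfs entries shorten the lock-recovery grace period, reduce the retransmit timeout for a low-latency network, and raise the thread limit for a server with many UDP clients. The values are illustrative, not recommendations.


GRACE_PERIOD=45
LOCKD_RETRANSMIT_TIMEOUT=10
LOCKD_SERVERS=64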

mountd Daemon

This daemon handles file-system mount requests from remote systems and provides access control. The mountd daemon checks /etc/dfs/sharetab to determine which file systems are available for remote mounting and which systems are allowed to do the remote mounting. You can use the -v option and the -r option with this command. See the mountd(1M) man page.

The -v option runs the command in verbose mode. Every time an NFS server determines the access that a client should be granted, a message is printed on the console. The information that is generated can be useful when trying to determine why a client cannot access a file system.

The -r option rejects all future mount requests from clients. This option does not affect clients that already have a file system mounted.


Note –

NFS version 4 does not use this daemon.


nfs4cbd Daemon

nfs4cbd, which is for the exclusive use of the NFS version 4 client, manages the communication endpoints for the NFS version 4 callback program. The daemon has no user-accessible interface. For more information, see the nfs4cbd(1M) man page.

nfsd Daemon

This daemon handles other client file-system requests. You can use several options with this command. See the nfsd(1M) man page for a complete listing. These options can either be used from the command line or by editing the appropriate string in /etc/default/nfs.

The NFSD_LISTEN_BACKLOG=length parameter in /etc/default/nfs sets the length of the connection queue over connection-oriented transports for NFS and TCP. The default value is 32 entries. The same selection can be made from the command line by starting nfsd with the -l option.

The NFSD_MAX_CONNECTIONS=#-conn parameter in /etc/default/nfs selects the maximum number of connections per connection-oriented transport. The default value for #-conn is unlimited. The same parameter can be used from the command line by starting the daemon with the -c #-conn option.

The NFSD_SERVERS=nservers parameter in /etc/default/nfs selects the maximum number of concurrent requests that a server can handle. The default value for nservers is 16. The same selection can be made from the command line by starting nfsd with the nservers option.

Unlike older versions of this daemon, nfsd does not spawn multiple copies to handle concurrent requests. Checking the process table with ps only shows one copy of the daemon running.
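For example, the following command line, offered as a sketch, sets a listen backlog of 64 and a maximum of 32 concurrent requests, the equivalent of setting NFSD_LISTEN_BACKLOG=64 and NFSD_SERVERS=32 in /etc/default/nfs:


# nfsd -l 64 32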

nfslogd Daemon

This daemon provides operational logging. NFS operations that are logged against a server are based on the configuration options that are defined in /etc/default/nfslogd. When NFS server logging is enabled, records of all RPC operations on a selected file system are written to a buffer file by the kernel. Then nfslogd postprocesses these requests. The name service switch is used to help map UIDs to logins and IP addresses to host names. The number is recorded if no match can be found through the identified name services.

Mapping of file handles to path names is also handled by nfslogd. The daemon tracks these mappings in a file-handle-to-path mapping table. One mapping table exists for each tag that is identified in /etc/nfs/nfslog.conf. After postprocessing, the records are written to ASCII log files.


Note –

NFS version 4 does not use this daemon.


nfsmapid Daemon

Version 4 of the NFS protocol (RFC3530) changed the way user or group identifiers (UID or GID) are exchanged between the client and server. The protocol requires that a file's owner and group attributes be exchanged between an NFS version 4 client and an NFS version 4 server as strings in the form of user@nfsv4_domain or group@nfsv4_domain, respectively.

For example, user known_user has a UID 123456 on an NFS version 4 client whose fully qualified hostname is system.example.com. For the client to make requests to the NFS version 4 server, the client must map the UID 123456 to known_user@example.com and then send this attribute to the NFS version 4 server. The NFS version 4 server expects to receive user and group file attributes in the user_or_group@nfsv4_domain format. After the server receives known_user@example.com from the client, the server maps the string to the local UID 123456, which is understood by the underlying file system. This functionality assumes that every UID and GID in the network is unique and that the NFS version 4 domains on the client match the NFS version 4 domains on the server.


Note –

If the server does not recognize the given user or group name, even if the NFS version 4 domains match, the server is unable to map the user or group name to its unique ID, an integer value. Under such circumstances, the server maps the inbound user or group name to the nobody user. To prevent such occurrences, administrators should avoid making special accounts that only exist on the NFS version 4 client.


The NFS version 4 client and server are both capable of performing integer-to-string and string-to-integer conversions. For example, in response to a GETATTR operation, the NFS version 4 server maps UIDs and GIDs obtained from the underlying file system into their respective string representation and sends this information to the client. Alternately, the client must also map UIDs and GIDs into string representations. For example, in response to the chown command, the client maps the new UID or GID to a string representation before sending a SETATTR operation to the server.

Note, however, that the client and server respond differently to unrecognized strings:

Configuration Files and nfsmapid

The following describes how the nfsmapid daemon uses the /etc/nsswitch.conf and /etc/resolv.conf files:

Precedence Rules

    For nfsmapid to work properly, NFS version 4 clients and servers must have the same domain. To ensure matching NFS version 4 domains, nfsmapid follows these strict precedence rules:

  1. The daemon first checks the /etc/default/nfs file for a value that has been assigned to the NFSMAPID_DOMAIN keyword. If a value is found, the assigned value takes precedence over any other settings. The assigned value is appended to the outbound attribute strings and is compared against inbound attribute strings. For more information about keywords in the /etc/default/nfs file, see Keywords for the /etc/default/nfs File. For procedural information, see Setting Up NFS Services.


    Note –

    The use of the NFSMAPID_DOMAIN setting is not scalable and is not recommended for large deployments.


  2. If no value has been assigned to NFSMAPID_DOMAIN, then the daemon checks for a domain name from a DNS TXT RR. nfsmapid relies on directives in the /etc/resolv.conf file that are used by the set of routines in the resolver. The resolver searches through the configured DNS servers for the _nfsv4idmapdomain TXT RR. Note that the use of DNS TXT records is more scalable. For this reason, continued use of TXT records is much preferred over setting the keyword in the /etc/default/nfs file.

  3. If no DNS TXT record is configured to provide a domain name, then the nfsmapid daemon uses the value specified by the domain or search directive in the /etc/resolv.conf file, with the directive specified last taking precedence.

    In the following example, where both the domain and search directives are used, the nfsmapid daemon uses the first domain listed after the search directive, which is company.com.


    domain example.company.com
    search company.com foo.bar.com
  4. If the /etc/resolv.conf file does not exist, nfsmapid obtains the NFS version 4 domain name by following the behavior of the domainname command. Specifically, if the /etc/defaultdomain file exists, nfsmapid uses the contents of that file for the NFS version 4 domain. If the /etc/defaultdomain file does not exist, nfsmapid uses the domain name that is provided by the network's configured naming service. For more information, see the domainname(1M) man page.

nfsmapid and DNS TXT Records

The ubiquitous nature of DNS provides an efficient storage and distribution mechanism for the NFS version 4 domain name. Additionally, because of the inherent scalability of DNS, the use of DNS TXT resource records is the preferred method for configuring the NFS version 4 domain name for large deployments. You should configure the _nfsv4idmapdomain TXT record on enterprise-level DNS servers. Such configurations ensure that any NFS version 4 client or server can find its NFS version 4 domain by traversing the DNS tree.

The following is an example of a preferred entry for enabling the DNS server to provide the NFS version 4 domain name:


_nfsv4idmapdomain		IN		TXT			"foo.bar"

In this example, the domain name to configure is the value that is enclosed in double-quotes. Note that no ttl field is specified and that no domain is appended to _nfsv4idmapdomain, which is the value in the owner field. This configuration enables the TXT record to use the zone's ${ORIGIN} entry from the Start-Of-Authority (SOA) record. For example, at different levels of the domain namespace, the record could read as follows:


_nfsv4idmapdomain.subnet.yourcorp.com.    IN    TXT    "foo.bar"
_nfsv4idmapdomain.yourcorp.com.           IN    TXT    "foo.bar"

This configuration provides DNS clients with the added flexibility of using the resolv.conf file to search up the DNS tree hierarchy. See the resolv.conf(4) man page. This capability provides a higher probability of finding the TXT record. For even more flexibility, lower level DNS sub-domains can define their own DNS TXT resource records (RRs). This capability enables lower level DNS sub-domains to override the TXT record that is defined by the top level DNS domain.


Note –

The domain that is specified by the TXT record can be an arbitrary string that does not necessarily match the DNS domain for clients and servers that use NFS version 4. You have the option of not sharing NFS version 4 data with other DNS domains.


Checking for the NFS Version 4 Domain

Before assigning a value for your network's NFS version 4 domain, check to see if an NFS version 4 domain has already been configured for your network. The following examples provide ways of identifying your network's NFS version 4 domain.
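For example, as a sketch rather than a complete procedure, you could check whether a domain has already been assigned in the /etc/default/nfs file or published in a DNS TXT record. Adjust these commands for your name service configuration.


% grep NFSMAPID_DOMAIN /etc/default/nfs
% nslookup -q=txt _nfsv4idmapdomain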

For more information, see the following man pages:

Configuring the NFS Version 4 Default Domain

This section describes how the network obtains the desired default domain:

  • Configuring an NFS Version 4 Default Domain in the Solaris Express 5/06 Release

  • Configuring an NFS Version 4 Default Domain in the Solaris 10 Release

Configuring an NFS Version 4 Default Domain in the Solaris Express 5/06 Release

In the initial Solaris 10 release, the domain was defined during the first system reboot after installing the OS. In the Solaris Express 5/06 release, the NFS version 4 domain is defined during the installation of the OS. To provide this functionality, the following features have been added:

  • The sysidnfs4 program, which runs during installation to determine whether an NFS version 4 domain has already been configured

  • The nfs4_domain keyword, which can be used in the sysidcfg file to assign a value to the NFSMAPID_DOMAIN keyword in the /etc/default/nfs file

    The following describes how the functionality operates:

  1. The sysidnfs4 program checks the /etc/.sysIDtool.state file to determine whether an NFS version 4 domain has been identified.

    • If the .sysIDtool.state file shows that an NFS version 4 domain has been configured for the network, the sysidnfs4 program makes no further checks. See the following example of a .sysIDtool.state file:


      1       # System previously configured?
      1       # Bootparams succeeded?
      1       # System is on a network?
      1       # Extended network information gathered?
      1       # Autobinder succeeded?
      1       # Network has subnets?
      1       # root password prompted for?
      1       # locale and term prompted for?
      1       # security policy in place
      1       # NFSv4 domain configured
      xterms

      The 1 that appears before # NFSv4 domain configured confirms that the NFS version 4 domain has been configured.

    • If the .sysIDtool.state file shows that no NFS version 4 domain has been configured for the network, the sysidnfs4 program must make further checks. See the following example of a .sysIDtool.state file:


      1       # System previously configured?
      1       # Bootparams succeeded?
      1       # System is on a network?
      1       # Extended network information gathered?
      1       # Autobinder succeeded?
      1       # Network has subnets?
      1       # root password prompted for?
      1       # locale and term prompted for?
      1       # security policy in place
      0       # NFSv4 domain configured
      xterms

      The 0 that appears before # NFSv4 domain configured confirms that no NFS version 4 domain has been configured.

  2. If no NFS version 4 domain has been identified, the sysidnfs4 program checks the nfs4_domain keyword in the sysidcfg file.

    • If a value for nfs4_domain exists, that value is assigned to the NFSMAPID_DOMAIN keyword in the /etc/default/nfs file. Note that any value assigned to NFSMAPID_DOMAIN overrides the dynamic domain selection capability of the nfsmapid daemon. For more information about the dynamic domain selection capability of nfsmapid, see Precedence Rules.

    • If no value for nfs4_domain exists, the sysidnfs4 program identifies the domain that nfsmapid derives from the operating system's configured name services. This derived value is presented as a default domain at an interactive prompt that gives you the option of accepting the default value or assigning a different NFS version 4 domain.

This functionality makes the following obsolete:


Note –

Because of the inherent ubiquitous and scalable nature of DNS, the use of DNS TXT records for configuring the domain of large NFS version 4 deployments continues to be preferred and strongly encouraged. See nfsmapid and DNS TXT Records.


For specific information about the Solaris installation process, see the following:

Configuring an NFS Version 4 Default Domain in the Solaris 10 Release

In the initial Solaris 10 release of NFS version 4, if your network includes multiple DNS domains, but only has a single UID and GID namespace, all clients must use one value for NFSMAPID_DOMAIN. For sites that use DNS, nfsmapid resolves this issue by obtaining the domain name from the value that you assigned to _nfsv4idmapdomain. For more information, see nfsmapid and DNS TXT Records. If your network is not configured to use DNS, during the first system boot the Solaris OS uses the sysidconfig(1M) utility to provide the following prompts for an NFS version 4 domain name:


This system is configured with NFS version 4, which uses a 
domain name that is automatically derived from the system's 
name services. The derived domain name is sufficient for most 
configurations. In a few cases, mounts that cross different 
domains might cause files to be owned by nobody due to the 
lack of a common domain name.

Do you need to override the system's default NFS version 4 domain 
name (yes/no)? [no]

The default response is [no]. If you choose [no], you see the following:


For more information about how the NFS version 4 default domain name is 
derived and its impact, refer to the man pages for nfsmapid(1M) and 
nfs(4), and the System Administration Guide: Network Services.

If you choose [yes], you see this prompt:


Enter the domain to be used as the NFS version 4 domain name.
NFS version 4 domain name []:

Note –

If a value for NFSMAPID_DOMAIN exists in /etc/default/nfs, the [domain_name] that you provide overrides that value.


Additional Information About nfsmapid

For more information about nfsmapid, see the following:

statd Daemon

This daemon works with lockd to provide crash and recovery functions for the lock manager. The statd daemon tracks the clients that hold locks on an NFS server. If a server crashes, then after the server reboots, the statd on the server contacts the statd on each client. The client statd can then attempt to reclaim any locks on the server. The client statd also informs the server statd when a client has crashed so that the client's locks on the server can be cleared. You have no options to select with this daemon. For more information, see the statd(1M) man page.

In the Solaris 7 release, the way that statd tracks the clients has been improved. In all earlier Solaris releases, statd created files in /var/statmon/sm for each client by using the client's unqualified host name. This file naming caused problems if you had two clients in different domains that shared a host name, or if clients were not resident in the same domain as the NFS server. Because the unqualified host name only lists the host name, without any domain or IP-address information, the older version of statd had no way to differentiate between these types of clients. To fix this problem, the Solaris 7 statd creates a symbolic link in /var/statmon/sm to the unqualified host name by using the IP address of the client. The new link resembles the following:


# ls -l /var/statmon/sm
lrwxrwxrwx   1 daemon          11 Apr 29 16:32 ipv4.192.168.255.255 -> myhost
lrwxrwxrwx   1 daemon          11 Apr 29 16:32 ipv6.fec0::56:a00:20ff:feb9:2734 -> v6host
--w-------   1 daemon          11 Apr 29 16:32 myhost
--w-------   1 daemon          11 Apr 29 16:32 v6host

In this example, the client host name is myhost and the client's IP address is 192.168.255.255. If another host with the name myhost were mounting a file system, two symbolic links would lead to the host name.


Note –

NFS version 4 does not use this daemon.


NFS Commands

These commands must be run as root to be fully effective, but requests for information can be made by all users:

automount Command

This command installs autofs mount points and associates the information in the automaster files with each mount point. The syntax of the command is as follows:

automount [ -t duration ] [ -v ]

-t duration sets the time, in seconds, that a file system is to remain mounted when not in use, and -v selects the verbose mode. Running this command in the verbose mode allows for easier troubleshooting.

If not specifically set, the value for duration is set to 5 minutes. In most circumstances, this value is adequate. However, on systems that have many automounted file systems, you might need to increase the duration value. In particular, if a server has many users active, checking the automounted file systems every 5 minutes can be inefficient. Checking the autofs file systems every 1800 seconds, which is 30 minutes, could be more efficient. By not unmounting the file systems every 5 minutes, /etc/mnttab can become large. To reduce the output when df checks each entry in /etc/mnttab, you can filter the output from df by using the -F option (see the df(1M) man page) or by using egrep.

You should consider that adjusting the duration also changes how quickly changes to the automounter maps are reflected. Changes cannot be seen until the file system is unmounted. Refer to Modifying the Maps for instructions on how to modify automounter maps.
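For example, a hypothetical invocation that applies the 30-minute duration discussed above, in verbose mode, would be the following:


# automount -t 1800 -v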

clear_locks Command

This command enables you to remove all file, record, and share locks for an NFS client. You must be root to run this command. From an NFS server, you can clear the locks for a specific client. From an NFS client, you can clear locks for that client on a specific server. The following example would clear the locks for the NFS client that is named tulip on the current system.


# clear_locks tulip

Using the -s option enables you to specify which NFS host to clear the locks from. You must run this option from the NFS client that created the locks. In this situation, the locks from the client would be removed from the NFS server that is named bee.


# clear_locks -s bee

Caution –

This command should only be run when a client crashes and cannot clear its locks. To avoid data corruption problems, do not clear locks for an active client.


fsstat Command

Starting in the Solaris 10 11/06 release, the fsstat utility enables you to monitor file system operations by file system type and by mount point. Various options allow you to customize the output. See the following examples.

This example shows output for NFS version 3, version 4, and the root mount point.


% fsstat nfs3 nfs4 /
  new     name   name    attr    attr   lookup   rddir   read   read   write   write
 file    remov   chng     get     set      ops     ops    ops  bytes     ops   bytes
3.81K       90  3.65K   5.89M   11.9K    35.5M   26.6K   109K   118M   35.0K   8.16G  nfs3
  759      503    457   93.6K   1.44K     454K   8.82K  65.4K   827M     292    223K  nfs4
25.2K    18.1K  1.12K   54.7M    1017     259M   1.76M  22.4M  20.1G   1.43M   3.77G  /

This example uses the -i option to provide statistics about the I/O operations for NFS version 3, version 4, and the root mount point.


% fsstat -i nfs3 nfs4 /
 read    read    write   write   rddir   rddir   rwlock   rwulock
  ops   bytes      ops   bytes     ops   bytes      ops       ops
 109K    118M    35.0K   8.16G   26.6K   4.45M     170K      170K  nfs3
65.4K    827M      292    223K   8.82K   2.62M    74.1K     74.1K  nfs4
22.4M   20.1G    1.43M   3.77G   1.76M   3.29G    25.5M     25.5M  /

This example uses the -n option to provide statistics about the naming operations for NFS version 3, version 4, and the root mount point.


% fsstat -n nfs3 nfs4 /
lookup   creat   remov  link   renam  mkdir  rmdir   rddir  symlnk  rdlnk
 35.5M   3.79K      90     2   3.64K      5      0   26.6K      11   136K  nfs3
  454K     403     503     0     101      0      0   8.82K     356  1.20K  nfs4
  259M   25.2K   18.1K   114    1017     10      2   1.76M      12  8.23M  /

For more information, see the fsstat(1M) man page.

mount Command

With this command, you can attach a named file system, either local or remote, to a specified mount point. For more information, see the mount(1M) man page. Used without arguments, mount displays a list of file systems that are currently mounted on your computer.

Many types of file systems are included in the standard Solaris installation. Each file-system type has a specific man page that lists the options to mount that are appropriate for that file-system type. The man page for NFS file systems is mount_nfs(1M). For UFS file systems, see mount_ufs(1M).

The Solaris 7 release includes the ability to select a path name to mount from an NFS server by using an NFS URL instead of the standard server:/pathname syntax. See How to Mount an NFS File System Using an NFS URL for further information.


Caution –

The version of the mount command that is included in any Solaris release from 2.6 to the current release does not warn about invalid options. The command silently ignores any options that cannot be interpreted. Ensure that you verify all of the options that were used so that you can prevent unexpected behavior.


mount Options for NFS File Systems

The subsequent text lists some of the options that can follow the -o flag when you are mounting an NFS file system. For a complete list of options, refer to the mount_nfs(1M) man page.

bg|fg

These options can be used to select the retry behavior if a mount fails. The bg option causes the mount attempts to be run in the background. The fg option causes the mount attempt to be run in the foreground. The default is fg, which is the best selection for file systems that must be available. This option prevents further processing until the mount is complete. bg is a good selection for noncritical file systems because the client can do other processing while waiting for the mount request to be completed.

forcedirectio

This option improves performance of large sequential data transfers. Data is copied directly to a user buffer. No caching is performed in the kernel on the client. This option is off by default.

Previously, all write requests were serialized by both the NFS client and the NFS server. The NFS client has been modified to permit an application to issue concurrent writes, as well as concurrent reads and writes, to a single file. You can enable this functionality on the client by using the forcedirectio mount option. When you use this option, you are enabling this functionality for all files within the mounted file system. You could also enable this functionality on a single file on the client by using the directio() interface. Unless this functionality has been enabled, writes to files are serialized. Also, if concurrent writes or concurrent reads and writes are occurring, then POSIX semantics are no longer being supported for that file.

For an example of how to use this option, refer to Using the mount Command.

largefiles

With this option, you can access files that are larger than 2 Gbytes on a server that is running the Solaris 2.6 release. Whether a large file can be accessed can only be controlled on the server, so this option is silently ignored on NFS version 3 mounts. Starting with release 2.6, by default, all UFS file systems are mounted with largefiles. For mounts that use the NFS version 2 protocol, the largefiles option causes the mount to fail with an error.

nolargefiles

This option for UFS mounts guarantees that no large files can exist on the file system. See the mount_ufs(1M) man page. Because the existence of large files can only be controlled on the NFS server, no option for nolargefiles exists when using NFS mounts. Attempts to NFS-mount a file system by using this option are rejected with an error.

nosuid|suid

Starting in the Solaris 10 release, the nosuid option is the equivalent of specifying the nodevices option with the nosetuid option. When the nodevices option is specified, the opening of device-special files on the mounted file system is disallowed. When the nosetuid option is specified, the setuid bit and setgid bit in binary files that are located in the file system are ignored. The processes run with the privileges of the user who executes the binary file.

The suid option is the equivalent of specifying the devices option with the setuid option. When the devices option is specified, the opening of device-special files on the mounted file system is allowed. When the setuid option is specified, the setuid bit and the setgid bit in binary files that are located in the file system are honored by the kernel.

If neither option is specified, the default option is suid, which provides the default behavior of specifying the devices option with the setuid option.

The following table describes the effect of combining nosuid or suid with devices or nodevices, and setuid or nosetuid. Note that in each combination of options, the most restrictive option determines the behavior.

Behavior From the Combined Options           Option    Option      Option

The equivalent of nosetuid with nodevices    nosuid    nosetuid    nodevices
The equivalent of nosetuid with nodevices    nosuid    nosetuid    devices
The equivalent of nosetuid with nodevices    nosuid    setuid      nodevices
The equivalent of nosetuid with nodevices    nosuid    setuid      devices
The equivalent of nosetuid with nodevices    suid      nosetuid    nodevices
The equivalent of nosetuid with devices      suid      nosetuid    devices
The equivalent of setuid with nodevices      suid      setuid      nodevices
The equivalent of setuid with devices        suid      setuid      devices

The nosuid option provides additional security for NFS clients that access potentially untrusted servers. The mounting of remote file systems with this option reduces the chance of privilege escalation through importing untrusted devices or importing untrusted setuid binary files. All these options are available in all Solaris file systems.

public

This option forces the use of the public file handle when contacting the NFS server. If the public file handle is supported by the server, the mounting operation is faster because the MOUNT protocol is not used. Also, because the MOUNT protocol is not used, the public option allows mounting to occur through a firewall.

rw|ro

The -rw and -ro options indicate whether a file system is to be mounted read-write or read-only. The default is read-write, which is the appropriate option for remote home directories, mail-spooling directories, or other file systems that need to be changed by users. The read-only option is appropriate for directories that should not be changed by users. For example, shared copies of the man pages should not be writable by users.

sec=mode

You can use this option to specify the authentication mechanism to be used during the mount transaction. The value for mode can be one of the following.

  • Use krb5 for Kerberos version 5 authentication service.

  • Use krb5i for Kerberos version 5 with integrity.

  • Use krb5p for Kerberos version 5 with privacy.

  • Use none for no authentication.

  • Use dh for Diffie-Hellman (DH) authentication.

  • Use sys for standard UNIX authentication.

The modes are also defined in /etc/nfssec.conf.

soft|hard

An NFS file system that is mounted with the soft option returns an error if the server does not respond. The hard option causes the mount to continue to retry until the server responds. The default is hard, which should be used for most file systems. Applications frequently do not check return values from soft-mounted file systems, which can make the application fail or can lead to corrupted files. If the application does check the return values, routing problems and other conditions can still confuse the application or lead to file corruption if the soft option is used. In most situations, the soft option should not be used. If a file system is mounted by using the hard option and becomes unavailable, an application that uses this file system hangs until the file system becomes available.

Using the mount Command

Refer to the following examples.
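For example, the following hypothetical commands mount a remote file system read-only, first by using the standard server:/pathname syntax and then by using an NFS URL. The server name bee and the paths are illustrative only.


# mount -F nfs -o ro bee:/export/share/man /usr/man
# mount -F nfs -o ro nfs://bee/export/share/man /usr/man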

umount Command

This command enables you to remove a remote file system that is currently mounted. The umount command supports the -V option to allow for testing. You might also use the -a option to unmount several file systems at one time. If mount-points are included with the -a option, those file systems are unmounted. If no mount points are included, an attempt is made to unmount all file systems that are listed in /etc/mnttab except for the “required” file systems, such as /, /usr, /var, /proc, /dev/fd, and /tmp. Because the file system is already mounted and should have an entry in /etc/mnttab, you do not need to include a flag for the file-system type.

The -f option forces a busy file system to be unmounted. You can use this option to unhang a client that is hung while trying to mount an unmountable file system.


Caution –

By forcing an unmount of a file system, you can cause data loss if files are being written to.


See the following examples.


Example 6–1 Unmounting a File System

This example unmounts a file system that is mounted on /usr/man:


# umount /usr/man


Example 6–2 Using Options with umount

This example displays the results of running umount -a -V:


# umount -a -V
umount /home/kathys
umount /opt
umount /home
umount /net

Notice that this command does not actually unmount the file systems.


mountall Command

Use this command to mount all file systems or a specific group of file systems that are listed in a file-system table. The command provides a way of doing the following:

  • Selecting only the file systems of a specific file-system type by using the -F FSType option

  • Selecting all the remote file systems that are listed in a file-system table by using the -r option

  • Selecting all the local file systems by using the -l option

Because all file systems that are labeled as NFS file-system type are remote file systems, some of these options are redundant. For more information, see the mountall(1M) man page.

Note that the following two examples of user input are equivalent:


# mountall -F nfs

# mountall -F nfs -r

umountall Command

Use this command to unmount a group of file systems. The -k option runs the fuser -k mount-point command to kill any processes that are associated with the mount-point. The -s option indicates that unmount is not to be performed in parallel. -l specifies that only local file systems are to be used, and -r specifies that only remote file systems are to be used. The -h host option indicates that all file systems from the named host should be unmounted. You cannot combine the -h option with -l or -r.

The following is an example of unmounting all file systems that are mounted from remote hosts:


# umountall -r

The following is an example of unmounting all file systems that are currently mounted from the server bee:


# umountall -h bee

share Command

With this command, you can make a local file system on an NFS server available for mounting. You can also use the share command to display a list of the file systems on your system that are currently shared. The NFS server must be running for the share command to work. The NFS server software is started automatically during boot if an entry is in /etc/dfs/dfstab. The command does not report an error if the NFS server software is not running, so you must verify that the software is running.

The objects that can be shared include any directory tree. However, each file system hierarchy is limited by the disk slice or partition that the file system is located on. For instance, sharing the root (/) file system would not also share /usr, unless these directories are on the same disk partition or slice. Normal installation places root on slice 0 and /usr on slice 6. Also, sharing /usr would not share any other local disk partitions that are mounted on subdirectories of /usr.

A file system cannot be shared if that file system is part of a larger file system that is already being shared. For example, if /usr and /usr/local are on one disk slice, /usr can be shared or /usr/local can be shared. However, if both directories need to be shared with different share options, /usr/local must be moved to a separate disk slice.

You can gain access to a file system that is read-only shared through the file handle of a file system that is read-write shared. However, the two file systems have to be on the same disk slice. You can create a more secure situation. Place those file systems that need to be read-write on a separate partition or separate disk slice from the file systems that you need to share as read-only.


Note –

For information about how NFS version 4 functions when a file system is unshared and then reshared, refer to Unsharing and Resharing a File System in NFS Version 4.


Non-File-System-Specific share Options

Some of the options that you can include with the -o flag are as follows.

rw|ro

The pathname file system is shared read-write or read-only for all clients.

rw=accesslist

The file system is shared read-write for the clients that are listed only. All other requests are denied. Starting with the Solaris 2.6 release, the list of clients that are defined in accesslist has been expanded. See Setting Access Lists With the share Command for more information. You can use this option to override an -ro option.

NFS-Specific share Options

The options that you can use with NFS file systems include the following.

aclok

This option enables an NFS server that supports the NFS version 2 protocol to be configured to do access control for NFS version 2 clients. Without this option, all clients are given minimal access. With this option, the clients have maximal access. For instance, on file systems that are shared with the -aclok option, if anyone has read permissions, everyone does. However, without this option, you can deny access to a client who should have access permissions. A decision to permit too much access or too little access depends on the security systems already in place. See Using Access Control Lists to Protect UFS Files in System Administration Guide: Security Services for more information about access control lists (ACLs).


Note –

To use ACLs, ensure that clients and servers run software that supports the NFS version 3 and NFS_ACL protocols. If the software only supports the NFS version 3 protocol, clients obtain correct access but cannot manipulate the ACLs. If the software supports the NFS_ACL protocol, the clients obtain correct access and can manipulate the ACLs. Starting with the Solaris 2.5 release, the Solaris system supports both protocols.


anon=uid

You use uid to select the user ID of unauthenticated users. If you set uid to -1, the server denies access to unauthenticated users. You can grant root access by setting anon=0, but this option allows unauthenticated users to have root access, so use the root option instead.

index=filename

When a user accesses an NFS URL, the -index=filename option forces the HTML file to load, instead of displaying a list of the directory. This option mimics the action of current browsers if an index.html file is found in the directory that the HTTP URL is accessing. This option is the equivalent of setting the DirectoryIndex option for httpd. For instance, suppose that the dfstab file entry resembles the following:


share -F nfs -o ro,public,index=index.html /export/web

These URLs then display the same information:


nfs://<server>/<dir>
nfs://<server>/<dir>/index.html
nfs://<server>//export/web/<dir>
nfs://<server>//export/web/<dir>/index.html
http://<server>/<dir>
http://<server>/<dir>/index.html

log=tag

This option specifies the tag in /etc/nfs/nfslog.conf that contains the NFS server logging configuration information for a file system. This option must be selected to enable NFS server logging.
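For example, this hypothetical dfstab-style entry shares a file system with logging that is configured by the publicftp tag from the earlier /etc/nfs/nfslog.conf example. The path /export/ftp is illustrative only.


share -F nfs -o ro,log=publicftp /export/ftp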

nosuid

This option signals that all attempts to enable the setuid or setgid mode should be ignored. NFS clients cannot create files with the setuid or setgid bits on.

public

The -public option has been added to the share command to enable WebNFS browsing. Only one file system on a server can be shared with this option.

root=accesslist

The server gives root access to the hosts in the list. By default, the server does not give root access to any remote hosts. If the selected security mode is anything other than -sec=sys, you can only include client host names in the accesslist. Starting with the Solaris 2.6 release, the list of clients that are defined in accesslist is expanded. See Setting Access Lists With the share Command for more information.


Caution –

Granting root access to other hosts has wide security implications. Use the -root= option with extreme caution.


root=client-name

The client-name value is used with AUTH_SYS authentication to check the client's IP address against a list of addresses provided by exportfs(1B). If a match is found, root access is given to the file systems being shared.

root=host-name

For secure NFS modes, such as AUTH_SYS or RPCSEC_GSS, the server checks the clients' principal names against a list of host-based principal names that are derived from an access list. The generic syntax for the client's principal name is root@hostname. For Kerberos V the syntax is root/hostname.fully.qualified@REALM. When you use the host-name value, the clients on the access list must have the credentials for a principal name. For Kerberos V, the client must have a valid keytab entry for its root/hostname.fully.qualified@REALM principal name. For more information, see Configuring Kerberos Clients in System Administration Guide: Security Services.

sec=mode[:mode]

mode selects the security modes that are needed to obtain access to the file system. By default, the security mode is UNIX authentication. You can specify multiple modes, but use each security mode only once per command line. Each -mode option applies to any subsequent -rw, -ro, -rw=, -ro=, -root=, and -window= options until another -mode is encountered. The use of -sec=none maps all users to user nobody.

window=value

value selects the maximum lifetime in seconds of a credential on the NFS server. The default value is 30000 seconds or 8.3 hours.

Setting Access Lists With the share Command

In Solaris releases prior to 2.6, the accesslist that was included with either the -ro=, -rw=, or -root= option of the share command was restricted to a list of host names or netgroup names. Starting with the Solaris 2.6 release, the access list can also include a domain name, a subnet number, or an entry to deny access. These extensions should simplify file access control on a single server without having to change the namespace or maintain long lists of clients.

This command provides read-only access for most systems but allows read-write access for rose and lilac:


# share -F nfs -o ro,rw=rose:lilac /usr/src

In the next example, read-only access is assigned to any host in the eng netgroup. The client rose is specifically given read-write access.


# share -F nfs -o ro=eng,rw=rose /usr/src

Note –

You cannot specify both rw and ro without arguments. If no read-write option is specified, the default is read-write for all clients.


To share one file system with multiple clients, you must type all options on the same line. Multiple invocations of the share command on the same object “remember” only the last command that is run. This command enables read-write access to three client systems, but only rose and tulip are given access to the file system as root.


# share -F nfs -o rw=rose:lilac:tulip,root=rose:tulip /usr/src

When sharing a file system that uses multiple authentication mechanisms, ensure that you include the -ro, -ro=, -rw, -rw=, -root, and -window options after the correct security modes. In this example, UNIX authentication is selected for all hosts in the netgroup that is named eng. These hosts can only mount the file system in read-only mode. The hosts tulip and lilac can mount the file system read-write if these hosts use Diffie-Hellman authentication. With these options, tulip and lilac can mount the file system read-only even if these hosts are not using DH authentication. However, the host names must be listed in the eng netgroup.


# share -F nfs -o sec=dh,rw=tulip:lilac,sec=sys,ro=eng /usr/src

Even though UNIX authentication is the default security mode, UNIX authentication is not included if the -sec option is used. Therefore, you must include a -sec=sys option if UNIX authentication is to be used with any other authentication mechanism.

You can use a DNS domain name in the access list by preceding the actual domain name with a dot. The string that follows the dot is a domain name, not a fully qualified host name. The following entry allows mount access to all hosts in the eng.example.com domain:


# share -F nfs -o ro=.:.eng.example.com /export/share/man

In this example, the single “.” matches all hosts that are matched through the NIS or NIS+ namespaces. The results that are returned from these name services do not include the domain name. The “.eng.example.com” entry matches all hosts that use DNS for namespace resolution. DNS always returns a fully qualified host name. So, the longer entry is required if you use a combination of DNS and the other namespaces.

You can use a subnet number in an access list by preceding the actual network number or the network name with “@”. This character differentiates the network name from a netgroup or a fully qualified host name. You must identify the subnet in either /etc/networks or in an NIS or NIS+ namespace. The following entries have the same effect if the 192.168 subnet has been identified as the eng network:


# share -F nfs -o ro=@eng /export/share/man
# share -F nfs -o ro=@192.168 /export/share/man
# share -F nfs -o ro=@192.168.0.0 /export/share/man

The last two entries show that you do not need to include the full network address.

If the network prefix is not byte aligned, as with Classless Inter-Domain Routing (CIDR), the mask length can be explicitly specified on the command line. The mask length is defined by following either the network name or the network number with a slash and the number of significant bits in the prefix of the address. For example:


# share -F nfs -o ro=@eng/17 /export/share/man
# share -F nfs -o ro=@192.168.0/17 /export/share/man

In these examples, the “/17” indicates that the first 17 bits in the address are to be used as the mask. For additional information about CIDR, look up RFC 1519.

You can also select negative access by placing a “-” before the entry. Note that the entries are read from left to right. Therefore, you must place the negative access entries before the entry that the negative access entries apply to:


# share -F nfs -o ro=-rose:.eng.example.com /export/share/man

This example would allow access to any hosts in the eng.example.com domain except the host that is named rose.

unshare Command

This command allows you to make a previously available file system unavailable for mounting by clients. You can use the unshare command to unshare any file system, whether the file system was shared explicitly with the share command or automatically through /etc/dfs/dfstab. If you use the unshare command to unshare a file system that you shared through the dfstab file, remember that the file system is shared again when you exit and reenter run level 3. You must remove the entry for this file system from the dfstab file if the change is to persist.
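For example, if the dfstab file contained an entry such as the following (the path is illustrative), you would delete this line to prevent the file system from being shared again at the next change of run level:


share -F nfs -o ro /export/share/man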

When you unshare an NFS file system, access from clients with existing mounts is inhibited. The file system might still be mounted on the client, but the files are not accessible.


Note –

For information about how NFS version 4 functions when a file system is unshared and then reshared, refer to Unsharing and Resharing a File System in NFS Version 4.


The following is an example of unsharing a specific file system:


# unshare /usr/src

shareall Command

This command allows for multiple file systems to be shared. When used with no options, the command shares all entries in /etc/dfs/dfstab. You can include a file name to specify the name of a file that lists share command lines. If you do not include a file name, /etc/dfs/dfstab is checked. If you use a “-” to replace the file name, you can type share commands from standard input.

The following is an example of sharing all file systems that are listed in a local file:


# shareall /etc/dfs/special_dfstab
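You could instead supply the share commands on standard input by using the "-" option. The following is a minimal sketch; the entries are illustrative. Press Control-D to end the input.


# shareall -
share -F nfs -o ro /export/share/man
share -F nfs -o rw=rose /usr/src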

unshareall Command

This command makes all currently shared resources unavailable. The -F FSType option selects a list of file-system types that are defined in /etc/dfs/fstypes. This flag enables you to choose only certain types of file systems to be unshared. The default file-system type is defined in /etc/dfs/fstypes. To choose specific file systems, use the unshare command.

The following is an example of unsharing all NFS-type file systems:


# unshareall -F nfs

showmount Command

This command displays one of the following: all the clients that have remotely mounted file systems that are shared from an NFS server, only the directories that are mounted by clients, or the shared file systems with the client access information.


Note –

The showmount command only shows NFS version 2 and version 3 exports. This command does not show NFS version 4 exports.


The command syntax is as follows:

showmount [ -ade ] [ hostname ]

-a

Prints a list of all the remote mounts. Each entry includes the client name and the directory.

-d

Prints a list of the directories that are remotely mounted by clients.

-e

Prints a list of the file systems that are shared or exported.

hostname

Selects the NFS server to gather the information from.

If hostname is not specified, the local host is queried.

The following command lists all clients and the local directories that the clients have mounted:


# showmount -a bee
lilac:/export/share/man
lilac:/usr/src
rose:/usr/src
tulip:/export/share/man

The following command lists the directories that have been mounted:


# showmount -d bee
/export/share/man
/usr/src

The following command lists file systems that have been shared:


# showmount -e bee
/usr/src                (everyone)
/export/share/man       eng

setmnt Command

This command creates an /etc/mnttab table. The mount and umount commands consult the table. Generally, you do not have to run this command manually, as this command runs automatically when a system is booted.

Commands for Troubleshooting NFS Problems

These commands can be useful when troubleshooting NFS problems.

nfsstat Command

You can use this command to gather statistical information about NFS and RPC connections. The syntax of the command is as follows:

nfsstat [ -cmnrsz ]

-c

Displays client-side information

-m

Displays statistics for each NFS-mounted file system

-n

Specifies that NFS information is to be displayed on both the client side and the server side

-r

Displays RPC statistics

-s

Displays the server-side information

-z

Specifies that the statistics should be set to zero

If no options are supplied on the command line, the -cnrs options are used.

Gathering server-side statistics can be important for debugging problems when new software or new hardware is added to the computing environment. Running this command a minimum of once a week, and storing the numbers, provides a good history of previous performance.
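For example, a root crontab entry similar to the following (the log file location is only an illustration) would append the server-side statistics to a history file early every Sunday morning:


0 2 * * 0 /usr/bin/nfsstat -s >> /var/adm/nfsstat.history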

Refer to the following example:


# nfsstat -s

Server rpc:
Connection oriented:
calls      badcalls   nullrecv   badlen     xdrcall    dupchecks  dupreqs    
719949194  0          0          0          0          58478624   33         
Connectionless:
calls      badcalls   nullrecv   badlen     xdrcall    dupchecks  dupreqs    
73753609   0          0          0          0          987278     7254       

Server nfs:
calls                badcalls             
787783794            3516                 
Version 2: (746607 calls)
null       getattr    setattr    root       lookup     readlink   read       
883 0%     60 0%      45 0%      0 0%       177446 23% 1489 0%    537366 71% 
wrcache    write      create     remove     rename     link       symlink    
0 0%       1105 0%    47 0%      59 0%      28 0%      10 0%      9 0%       
mkdir      rmdir      readdir    statfs     
26 0%      0 0%       27926 3%   108 0%     
Version 3: (728863853 calls)
null          getattr       setattr       lookup        access        
1365467 0%    496667075 68% 8864191 1%    66510206 9%   19131659 2%   
readlink      read          write         create        mkdir         
414705 0%     80123469 10%  18740690 2%   4135195 0%    327059 0%     
symlink       mknod         remove        rmdir         rename        
101415 0%     9605 0%       6533288 0%    111810 0%     366267 0%     
link          readdir       readdirplus   fsstat        fsinfo        
2572965 0%    519346 0%     2726631 0%    13320640 1%   60161 0%      
pathconf      commit        
13181 0%      6248828 0%    
Version 4: (54871870 calls)
null                compound            
266963 0%           54604907 99%        
Version 4: (167573814 operations)
reserved            access              close               commit              
0 0%                2663957 1%          2692328 1%          1166001 0%          
create              delegpurge          delegreturn         getattr             
167423 0%           0 0%                1802019 1%          26405254 15%        
getfh               link                lock                lockt               
11534581 6%         113212 0%           207723 0%           265 0%              
locku               lookup              lookupp             nverify             
230430 0%           11059722 6%         423514 0%           21386866 12%        
open                openattr            open_confirm        open_downgrade      
2835459 1%          4138 0%             18959 0%            3106 0%             
putfh               putpubfh            putrootfh           read                
52606920 31%        0 0%                35776 0%            4325432 2%          
readdir             readlink            remove              rename              
606651 0%           38043 0%            560797 0%           248990 0%           
renew               restorefh           savefh              secinfo             
2330092 1%          8711358 5%          11639329 6%         19384 0%            
setattr             setclientid         setclientid_confirm verify              
453126 0%           16349 0%            16356 0%            2484 0%             
write               release_lockowner   illegal             
3247770 1%          0 0%                0 0%                

Server nfs_acl:
Version 2: (694979 calls)
null        getacl      setacl      getattr     access      getxattrdir 
0 0%        42358 6%    0 0%        584553 84%  68068 9%    0 0%        
Version 3: (2465011 calls)
null        getacl      setacl      getxattrdir 
0 0%        1293312 52% 1131 0%     1170568 47% 

The previous listing is an example of NFS server statistics. The first five lines relate to RPC and the remaining lines report NFS activities. In both sets of statistics, knowing the average number of badcalls or calls and the number of calls per week can help identify a problem. The badcalls value reports the number of bad messages from a client. This value can indicate network hardware problems.

Some of the connections generate write activity on the disks. A sudden increase in these statistics could indicate trouble and should be investigated. For NFS version 2 statistics, the connections to note are setattr, write, create, remove, rename, link, symlink, mkdir, and rmdir. For NFS version 3 and version 4 statistics, the value to watch is commit. If the commit level is high in one NFS server, compared to another almost identical server, check that the NFS clients have enough memory. The number of commit operations on the server grows when clients do not have available resources.

pstack Command

This command displays a stack trace for each process. The pstack command must be run by the owner of the process or by root. You can use pstack to determine where a process is hung. The only option that is allowed with this command is the PID of the process that you want to check. See the proc(1) man page.

The following example checks the nfsd process that is running.


# /usr/bin/pgrep nfsd
243
# /usr/bin/pstack 243
243:    /usr/lib/nfs/nfsd -a 16
 ef675c04 poll     (24d50, 2, ffffffff)
 000115dc ???????? (24000, 132c4, 276d8, 1329c, 276d8, 0)
 00011390 main     (3, efffff14, 0, 0, ffffffff, 400) + 3c8
 00010fb0 _start   (0, 0, 0, 0, 0, 0) + 5c

The example shows that the process is waiting for a new connection request, which is a normal response. If the stack shows that the process is still in poll after a request is made, the process might be hung. Follow the instructions in How to Restart NFS Services to fix this problem. Review the instructions in NFS Troubleshooting Procedures to fully verify that your problem is a hung program.

rpcinfo Command

This command generates information about the RPC service that is running on a system. You can also use this command to change the RPC service. Many options are available with this command. See the rpcinfo(1M) man page. The following is a shortened synopsis for some of the options that you can use with the command.

rpcinfo [ -m | -s ] [ hostname ]

rpcinfo -T transport hostname [ progname ]

rpcinfo [ -t | -u ] [ hostname ] [ progname ]

-m

Displays a table of statistics of the rpcbind operations

-s

Displays a concise list of all registered RPC programs

-T

Displays information about services that use specific transports or protocols

-t

Probes the RPC programs that use TCP

-u

Probes the RPC programs that use UDP

transport

Selects the transport or protocol for the services

hostname

Selects the host name of the server that you need information from

progname

Selects the RPC program to gather information about

If no value is given for hostname, the local host name is used. You can substitute the RPC program number for progname, but many users find the name easier to remember than the number. You can use the -p option in place of the -s option on those systems that do not run the NFS version 3 software.
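For example, because the NFS service is registered as RPC program number 100003, the following two commands are equivalent:


% rpcinfo -u bee nfs
% rpcinfo -u bee 100003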

The data that is generated by this command can include the following:

The following example gathers information about the RPC services that are running on a server. The text that is generated by the command is filtered by the sort command to make the output more readable. Several lines that list RPC services have been deleted from the example.


% rpcinfo -s bee |sort -n
   program version(s) netid(s)                         service     owner
    100000  2,3,4     udp6,tcp6,udp,tcp,ticlts,ticotsord,ticots rpcbind     superuser
    100001  4,3,2     ticlts,udp,udp6                  rstatd      superuser
    100002  3,2       ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6 rusersd     superuser
    100003  3,2       tcp,udp,tcp6,udp6                nfs         superuser
    100005  3,2,1     ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6 mountd      superuser
    100007  1,2,3     ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 ypbind      superuser
    100008  1         ticlts,udp,udp6                  walld       superuser
    100011  1         ticlts,udp,udp6                  rquotad     superuser
    100012  1         ticlts,udp,udp6                  sprayd      superuser
    100021  4,3,2,1   tcp,udp,tcp6,udp6                nlockmgr    superuser
    100024  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 status      superuser
    100029  3,2,1     ticots,ticotsord,ticlts          keyserv     superuser
    100068  5         tcp,udp                          cmsd        superuser
    100083  1         tcp,tcp6                         ttdbserverd superuser
    100099  3         ticotsord                        autofs      superuser
    100133  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
    100134  1         ticotsord                        tokenring   superuser
    100155  1         ticots,ticotsord,tcp,tcp6        smserverd   superuser
    100221  1         tcp,tcp6                         -           superuser
    100227  3,2       tcp,udp,tcp6,udp6                nfs_acl     superuser
    100229  1         tcp,tcp6                         metad       superuser
    100230  1         tcp,tcp6                         metamhd     superuser
    100231  1         ticots,ticotsord,ticlts          -           superuser
    100234  1         ticotsord                        gssd        superuser
    100235  1         tcp,tcp6                         -           superuser
    100242  1         tcp,tcp6                         metamedd    superuser
    100249  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
    300326  4         tcp,tcp6                         -           superuser
    300598  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
    390113  1         tcp                              -           unknown
 805306368  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
1289637086  1,5       tcp                              -           26069

The following two examples show how to gather information about a particular RPC service by selecting a particular transport on a server. The first example checks the mountd service that is running over TCP. The second example checks the NFS service that is running over UDP.


% rpcinfo -t bee mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
% rpcinfo -u bee nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting

snoop Command

This command is often used to watch for packets on the network. The snoop command must be run as root. The use of this command is a good way to ensure that the network hardware is functioning on both the client and the server. Many options are available. See the snoop(1M) man page. A shortened synopsis of the command follows:

snoop [ -d device ] [ -o filename ] [ host hostname ]

-d device

Specifies the local network interface

-o filename

Stores all the captured packets into the named file

hostname

Displays packets going to and from a specific host only

The -d device option is useful on those servers that have multiple network interfaces. You can use many expressions other than setting the host. A combination of command expressions with grep can often generate data that is specific enough to be useful.

When troubleshooting, make sure that packets are going to and from the proper host. Also, look for error messages. Saving the packets to a file can simplify the review of the data.
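For example, commands such as the following (the host name and capture file are illustrative) save the packets for a particular client to a file and then display the saved packets, filtered with grep for review:


# snoop -o /tmp/nfs.cap host rose
# snoop -i /tmp/nfs.cap | grep NFS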

truss Command

You can use this command to check if a process is hung. The truss command must be run by the owner of the process or by root. You can use many options with this command. See the truss(1) man page. A shortened syntax of the command follows.

truss [ -t syscall ] -p pid

-t syscall

Selects system calls to trace

-p pid

Indicates the PID of the process to be traced

The syscall can be a comma-separated list of system calls to be traced. Also, starting syscall with an ! excludes the listed system calls from the trace.
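For example, the following command (the PID is illustrative) would trace every system call except poll for process 243:


# /usr/bin/truss -t !poll -p 243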

The following example shows that the process is waiting for another connection request from a new client.


# /usr/bin/truss -p 243
poll(0x00024D50, 2, -1)         (sleeping...)

The previous example shows a normal response. If the response does not change after a new connection request has been made, the process could be hung. Follow the instructions in How to Restart NFS Services to fix the hung program. Review the instructions in NFS Troubleshooting Procedures to fully verify that your problem is a hung program.

NFS Over RDMA

The Solaris 10 release includes the Remote Direct Memory Access (RDMA) protocol, which is a technology for memory-to-memory transfer of data over high-speed networks. Specifically, RDMA provides remote data transfer directly to and from memory without CPU intervention. RDMA also provides direct data placement, which eliminates data copies and, therefore, further reduces CPU intervention. Thus, RDMA not only relieves the host CPU but also reduces contention for the host memory and I/O buses. To provide this capability, RDMA combines the interconnect I/O technology of InfiniBand on SPARC platforms with the Solaris operating system. The following figure shows the relationship of RDMA to other protocols, such as UDP and TCP.

Figure 6–1 Relationship of RDMA to Other Protocols

The context describes the graphic.

If the RDMA transport is not available on both the client and the server, the TCP transport is the initial fallback, followed by UDP if TCP is unavailable. Note, however, that if you use the proto=rdma mount option, NFS mounts are forced to use RDMA only.
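For example, a mount command such as the following (the server and path names are illustrative) forces the mount to use RDMA:


# mount -F nfs -o proto=rdma bee:/export/share/man /mnt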

For more information about NFS mount options, see the mount_nfs(1M) man page and mount Command.


Note –

RDMA for InfiniBand uses the IP addressing format and the IP lookup infrastructure to specify peers. However, because RDMA is a separate protocol stack, it does not fully implement all IP semantics. For example, RDMA does not use IP addressing to communicate with peers. Therefore, RDMA might bypass configurations for various security policies that are based on IP addresses. However, the NFS and RPC administrative policies, such as mount restrictions and secure RPC, are not bypassed.


How the NFS Service Works

The following sections describe some of the complex functions of the NFS software. Note that some of the feature descriptions in this section are exclusive to NFS version 4.


Note –

If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones for more information.


Version Negotiation in NFS

The NFS initiation process includes negotiating the protocol levels for servers and clients. If you do not specify the version level, then the best level is selected by default. For example, if both the client and the server can support version 3, then version 3 is used. If the client or the server can only support version 2, then version 2 is used.

Starting in the Solaris 10 release, you can set the keywords NFS_CLIENT_VERSMIN, NFS_CLIENT_VERSMAX, NFS_SERVER_VERSMIN, NFS_SERVER_VERSMAX in the /etc/default/nfs file. Your specified minimum and maximum values for the server and the client would replace the default values for these keywords. For both the client and the server the default minimum value is 2 and the default maximum value is 4. See Keywords for the /etc/default/nfs File. To find the version supported by the server, the NFS client begins with the setting for NFS_CLIENT_VERSMAX and continues to try each version until reaching the version setting for NFS_CLIENT_VERSMIN. As soon as the supported version is found, the process terminates. For example, if NFS_CLIENT_VERSMAX=4 and NFS_CLIENT_VERSMIN=2, then the client attempts version 4 first, then version 3, and finally version 2. If NFS_CLIENT_VERSMIN and NFS_CLIENT_VERSMAX are set to the same value, then the client always uses this version and does not attempt any other version. If the server does not offer this version, the mount fails.
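For example, entries such as the following in the /etc/default/nfs file would prevent the client from ever negotiating down to NFS version 2:


NFS_CLIENT_VERSMAX=4
NFS_CLIENT_VERSMIN=3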


Note –

You can override the values that are determined by the negotiation by using the vers option with the mount command. See the mount_nfs(1M) man page.


For procedural information, refer to Setting Up NFS Services.
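As the preceding note indicates, you can also force a specific version at mount time. For example, the following command (the server and path names are illustrative) mounts a file system with NFS version 3 regardless of the negotiated default:


# mount -F nfs -o vers=3 bee:/export/share/man /mnt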

Features in NFS Version 4

Many changes have been made to NFS in version 4. This section provides descriptions of these new features.


Note –

Starting in the Solaris 10 release, NFS version 4 does not support the LIPKEY/SPKM security flavor. Also, NFS version 4 does not use the mountd, nfslogd, and statd daemons.


For procedural information related to using NFS version 4, refer to Setting Up NFS Services.

Unsharing and Resharing a File System in NFS Version 4

With both NFS version 3 and version 4, if a client attempts to access a file system that has been unshared, the server responds with an error code. However, with NFS version 3 the server maintains any locks that the clients had obtained before the file system was unshared. Thus, when the file system is reshared, NFS version 3 clients can access the file system as though that file system had never been unshared.

With NFS version 4, when a file system is unshared, all the state for any open files or file locks in that file system is destroyed. If the client attempts to access these files or locks, the client receives an error. This error is usually reported as an I/O error to the application. Note, however, that resharing a currently shared file system to change options does not destroy any of the state on the server.

For related information, refer to Client Recovery in NFS Version 4 or see the unshare_nfs(1M) man page.

File-System Namespace in NFS Version 4

NFS version 4 servers create and maintain a pseudo-file system, which provides clients with seamless access to all exported objects on the server. Prior to NFS version 4, the pseudo-file system did not exist. Clients were forced to mount each shared server file system for access. Consider the following example.

Figure 6–2 Views of the Server File System and the Client File System

The context describes the graphic.

Note that the client cannot see the payroll directory and the nfs4x directory, because these directories are not exported and do not lead to exported directories. However, the local directory is visible to the client, because local is an exported directory. The projects directory is visible to the client, because projects leads to the exported directory, nfs4. Thus, portions of the server namespace that are not explicitly exported are bridged with a pseudo-file system that views only the exported directories and those directories that lead to server exports.

A pseudo-file system is a structure that contains only directories and is created by the server. The pseudo-file system permits a client to browse the hierarchy of exported file systems. Thus, the client's view of the pseudo-file system is limited to paths that lead to exported file systems.

Previous versions of NFS did not permit a client to traverse server file systems without mounting each file system. However, in NFS version 4, the server namespace does the following:

For POSIX-related reasons, the Solaris NFS version 4 client does not cross server file-system boundaries. When such attempts are made, the client makes the directory appear to be empty. To remedy this situation, you must perform a mount for each of the server's file systems.

Volatile File Handles in NFS Version 4

File handles are created on the server and contain information that uniquely identifies files and directories. In NFS versions 2 and 3 the server returned persistent file handles. Thus, the client could guarantee that the server would generate a file handle that always referred to the same file. For example:

Thus, when the server received a request from a client that included a file handle, the resolution was straightforward and the file handle always referred to the correct file.

This method of identifying files and directories for NFS operations was fine for most UNIX-based servers. However, the method could not be implemented on servers that relied on other methods of identification, such as a file's path name. To resolve this problem, the NFS version 4 protocol permits a server to declare that its file handles are volatile. Thus, a file handle could change. If the file handle does change, the client must find the new file handle.

Like NFS versions 2 and 3, the Solaris NFS version 4 server always provides persistent file handles. However, Solaris NFS version 4 clients that access non-Solaris NFS version 4 servers must support volatile file handles if the server uses them. Specifically, when the server tells the client that the file handle is volatile, the client must cache the mapping between path name and file handle. The client uses the volatile file handle until it expires. On expiration, the client does the following:


Note –

The server always tells the client which file handles are persistent and which file handles are volatile.


Volatile file handles might expire for any of these reasons:

Note that if the client is unable to find the new file handle, an error message is put in the syslog file. Further attempts to access this file fail with an I/O error.

Client Recovery in NFS Version 4

The NFS version 4 protocol is a stateful protocol. A protocol is stateful when both the client and the server maintain current information about open files and file locks.

When a failure occurs, such as a server crash, the client and the server work together to reestablish the open and lock states that existed prior to the failure.

When a server crashes and is rebooted, the server loses its state. The client detects that the server has rebooted and begins the process of helping the server rebuild its state. This process is known as client recovery, because the client directs the process.

When the client discovers that the server has rebooted, the client immediately suspends its current activity and begins the process of client recovery. When the recovery process starts, a message, such as the following, is displayed in the system error log /var/adm/messages.


NOTICE: Starting recovery server basil.example.company.com

During the recovery process, the client sends the server information about the client's previous state. Note, however, that during this period the client does not send any new requests to the server. Any new requests to open files or set file locks must wait for the server to complete its recovery period before proceeding.

When the client recovery process is complete, the following message is displayed in the system error log /var/adm/messages.


NOTICE: Recovery done for server basil.example.company.com

Now the client has successfully completed sending its state information to the server. However, even though the client has completed this process, other clients might not have completed their process of sending state information to the server. Therefore, for a period of time, the server does not accept any open or lock requests. This period of time, which is known as the grace period, is designated to permit all the clients to complete their recovery.

During the grace period, if the client attempts to open any new files or establish any new locks, the server denies the request with the GRACE error code. On receiving this error, the client must wait for the grace period to end and then resend the request to the server. During the grace period the following message is displayed.


NFS server recovering

Note that during the grace period the commands that do not open files or set file locks can proceed. For example, the commands ls and cd do not open a file or set a file lock. Thus, these commands are not suspended. However, a command such as cat, which opens a file, would be suspended until the grace period ends.

When the grace period has ended, the following message is displayed.


NFS server recovery ok.

The client can now send new open and lock requests to the server.

Client recovery can fail for a variety of reasons. For example, if a network partition exists after the server reboots, the client might not be able to reestablish its state with the server before the grace period ends. When the grace period has ended, the server does not permit the client to reestablish its state because new state operations could create conflicts. For example, a new file lock might conflict with an old file lock that the client is trying to recover. When such situations occur, the server returns the NO_GRACE error code to the client.

If the recovery of an open operation for a particular file fails, the client marks the file as unusable and the following message is displayed.


WARNING: The following NFS file could not be recovered and was marked dead 
(can't reopen:  NFS status 70):  file :  filename

Note that the number 70 is only an example.

If reestablishing a file lock during recovery fails, the following error message is posted.


NOTICE: nfs4_send_siglost:  pid PROCESS-ID lost
lock on server SERVER-NAME

In this situation, the SIGLOST signal is posted to the process. The default action for the SIGLOST signal is to terminate the process.

For you to recover from this state, you must restart any applications that had files open at the time of the failure. Note that the following can occur.

Thus, some processes can access a particular file while other processes cannot.

OPEN Share Support in NFS Version 4

The NFS version 4 protocol provides several file-sharing modes that the client can use to control file access by other clients. A client can specify the following:

The Solaris NFS version 4 server fully implements these file-sharing modes. Therefore, if a client attempts to open a file in a way that conflicts with the current share mode, the server denies the attempt by failing the operation. When such attempts fail with the initiation of the open or create operations, the Solaris NFS version 4 client receives a protocol error. This error is mapped to the application error EACCES.

Even though the protocol provides several sharing modes, currently the open operation in Solaris does not offer multiple sharing modes. When opening a file, a Solaris NFS version 4 client can only use the DENY_NONE mode.

Also, even though the Solaris fcntl system call has an F_SHARE command to control file sharing, the fcntl commands cannot be implemented correctly with NFS version 4. If you use these fcntl commands on an NFS version 4 client, the client returns the EAGAIN error to the application.

Delegation in NFS Version 4

NFS version 4 provides both client support and server support for delegation. Delegation is a technique by which the server delegates the management of a file to a client. For example, the server could grant either a read delegation or a write delegation to a client. Read delegations can be granted to multiple clients at the same time, because these read delegations do not conflict with each other. A write delegation can be granted to only one client, because a write delegation conflicts with any file access by any other client. While holding a write delegation, the client would not send various operations to the server because the client is guaranteed exclusive access to a file. Similarly, the client would not send various operations to the server while holding a read delegation. The reason is that the server guarantees that no client can open the file in write mode. The effect of delegation is to greatly reduce the interactions between the server and the client for delegated files. Therefore, network traffic is reduced, and performance on the client and the server is improved. Note, however, that the degree of performance improvement depends on the kind of file interaction used by an application and the amount of network and server congestion.

The decision about whether to grant a delegation is made entirely by the server. A client does not request a delegation. The server makes decisions about whether to grant a delegation, based on the access patterns for the file. If a file has been recently accessed in write mode by several different clients, the server might not grant a delegation. The reason is that this access pattern indicates the potential for future conflicts.

A conflict occurs when a client accesses a file in a manner that is inconsistent with the delegations that are currently granted for that file. For example, if a client holds a write delegation on a file and a second client opens that file for read or write access, the server recalls the first client's write delegation. Similarly, if a client holds a read delegation and another client opens the same file for writing, the server recalls the read delegation. Note that in both situations, the second client is not granted a delegation because a conflict now exists. When a conflict occurs, the server uses a callback mechanism to contact the client that currently holds the delegation. On receiving this callback, the client sends the file's updated state to the server and returns the delegation. If the client fails to respond to the recall, the server revokes the delegation. In such instances, the server rejects all operations from the client for this file, and the client reports the requested operations as failures. Generally, these failures are reported to the application as I/O errors. To recover from these errors, the file must be closed and then reopened. Failures from revoked delegations can occur when a network partition exists between the client and the server while the client holds a delegation.

Note that one server does not resolve access conflicts for a file that is stored on another server. Thus, an NFS server only resolves conflicts for files that it stores. Furthermore, in response to conflicts that are caused by clients that are running various versions of NFS, an NFS server can only initiate recalls to the client that is running NFS version 4. An NFS server cannot initiate recalls for clients that are running earlier versions of NFS.

The process for detecting conflicts varies. For example, unlike NFS version 4, because version 2 and version 3 do not have an open procedure, the conflict is detected only after the client attempts to read, write, or lock a file. The server's response to these conflicts varies also. For example:

These conditions clear when the delegation conflict has been resolved.

By default, server delegation is enabled. You can disable delegation by modifying the /etc/default/nfs file. For procedural information, refer to How to Select Different Versions of NFS on a Server.
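For example, a line such as the following in the /etc/default/nfs file turns off server delegation; the keyword is described in the nfs(4) man page:


NFS_SERVER_DELEGATION=off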

No keywords are required for client delegation. The NFS version 4 callback daemon, nfs4cbd, provides the callback service on the client. This daemon is started automatically whenever a mount for NFS version 4 is enabled. By default, the client provides the necessary callback information to the server for all Internet transports that are listed in the /etc/netconfig system file. Note that if the client is enabled for IPv6 and if the IPv6 address for the client's name can be determined, then the callback daemon accepts IPv6 connections.

The callback daemon uses a transient program number and a dynamically assigned port number. This information is provided to the server, and the server tests the callback path before granting any delegations. If the callback path does not test successfully, the server does not grant delegations, which is the only externally visible behavior.

Note that because callback information is embedded within an NFS version 4 request, the server is unable to contact the client through a device that uses Network Address Translation (NAT). Also, the callback daemon uses a dynamic port number. Therefore, the server might not be able to traverse a firewall, even if that firewall enables normal NFS traffic on port 2049. In such situations, the server does not grant delegations.

ACLs and nfsmapid in NFS Version 4

An access control list (ACL) provides better file security by enabling the owner of a file to define file permissions for the file owner, the group, and other specific users and groups. ACLs are set on the server and the client by using the setfacl command. See the setfacl(1) man page for more information. In NFS version 4, the ID mapper, nfsmapid, is used to map user or group IDs in ACL entries on a server to user or group IDs in ACL entries on a client. The reverse is also true. The user and group IDs in the ACL entries must exist on both the client and the server.
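For example, a command such as the following (the user name and file are illustrative) adds a read-write ACL entry for one user to a file:


# setfacl -m user:terry:rw- /export/docs/plan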

Reasons for ID Mapping to Fail

The following situations can cause ID mapping to fail:

Avoiding ID Mapping Problems With ACLs

To avoid ID mapping problems, do the following:

Checking for Unmapped User or Group IDs

To determine if any user or group cannot be mapped on the server or client, use the following script:


#! /usr/sbin/dtrace -Fs

sdt:::nfs4-acl-nobody
{
     printf("validate_idmapping: (%s) in the ACL could not be mapped!", 
stringof(arg0));
}
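To use the script, you could save it to a file (the name is only an illustration), make the file executable, and run it as superuser:


# chmod +x /var/tmp/nfs4-acl-check.d
# /var/tmp/nfs4-acl-check.d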

Note –

The probe name that is used in this script is an interface that could change in the future. For more information, see Stability Levels in Solaris Dynamic Tracing Guide.


Additional Information About ACLs or nfsmapid

See the following:

UDP and TCP Negotiation

During initiation, the transport protocol is also negotiated. By default, the first connection-oriented transport that is supported on both the client and the server is selected. If this selection does not succeed, the first available connectionless transport protocol is used. The transport protocols that are supported on a system are listed in /etc/netconfig. TCP is the connection-oriented transport protocol that is supported by the release. UDP is the connectionless transport protocol.

When both the NFS protocol version and the transport protocol are determined by negotiation, the NFS protocol version is given precedence over the transport protocol. The NFS version 3 protocol that uses UDP is given higher precedence than the NFS version 2 protocol that uses TCP. You can manually select both the NFS protocol version and the transport protocol with the mount command. See the mount_nfs(1M) man page. Under most conditions, allow the negotiation to select the best options.

File Transfer Size Negotiation

The file transfer size establishes the size of the buffers that are used when transferring data between the client and the server. In general, larger transfer sizes are better. The NFS version 3 protocol has an unlimited transfer size. However, starting with the Solaris 2.6 release, the software bids a default buffer size of 32 Kbytes. The client can bid a smaller transfer size at mount time if needed, but under most conditions this bid is not necessary.

The transfer size is not negotiated with systems that use the NFS version 2 protocol. Under this condition, the maximum transfer size is set to 8 Kbytes.

You can use the -rsize and -wsize options to set the transfer size manually with the mount command. You might need to reduce the transfer size for some PC clients. Also, you can increase the transfer size if the NFS server is configured to use larger transfer sizes.
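For example, a command such as the following (the server and path names are illustrative) reduces both the read and write buffer sizes to 8 Kbytes:


# mount -F nfs -o rsize=8192,wsize=8192 bee:/export/share/man /mnt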


Note –

Starting in the Solaris 10 release, restrictions on wire transfer sizes have been relaxed. The transfer size is based on the capabilities of the underlying transport. For example, the NFS transfer limit for UDP is still 32 Kbytes. However, because TCP is a streaming protocol without the datagram limits of UDP, maximum transfer sizes over TCP have been increased to 1 Mbyte.


How File Systems Are Mounted

The following description applies to NFS version 3 mounts. The NFS version 4 mount process includes neither the portmap service nor the MOUNT protocol.

When a client needs to mount a file system from a server, the client must obtain a file handle from the server. The file handle must correspond to the file system. This process requires that several transactions occur between the client and the server. In this example, the client is attempting to mount the server's /export/home9/terry file system. A snoop trace for this transaction follows.


client -> server PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
server -> client PORTMAP R GETPORT port=33492
client -> server MOUNT3 C Null
server -> client MOUNT3 R Null 
client -> server MOUNT3 C Mount /export/home9/terry
server -> client MOUNT3 R Mount OK FH=9000 Auth=unix
client -> server PORTMAP C GETPORT prog=100003 (NFS) vers=3 proto=TCP
server -> client PORTMAP R GETPORT port=2049
client -> server NFS C NULL3
server -> client NFS R NULL3 
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK

In this trace, the client first requests the mount port number from the portmap service on the NFS server. After the client receives the mount port number (33492), that number is used to test the availability of the service on the server. After the client has determined that a service is running on that port number, the client then makes a mount request. When the server responds to this request, the server includes the file handle for the file system (9000) being mounted. The client then sends a request for the NFS port number. When the client receives the number from the server, the client tests the availability of the NFS service (nfsd). Also, the client requests NFS information about the file system that uses the file handle.

In the following trace, the client is mounting the file system with the public option.


client -> server NFS C LOOKUP3 FH=0000 /export/home9/terry
server -> client NFS R LOOKUP3 OK FH=9000
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK

By using the default public file handle (which is 0000), all the transactions to obtain information from the portmap service and to determine the NFS port number are skipped.


Note –

NFS version 4 provides support for volatile file handles. For more information, refer to Volatile File Handles in NFS Version 4.


Effects of the -public Option and NFS URLs When Mounting

Using the -public option can create conditions that cause a mount to fail. Adding an NFS URL can also confuse the situation. The following list describes the specifics of how a file system is mounted when you use these options.

Client-Side Failover

By using client-side failover, an NFS client can be aware of multiple servers that are making the same data available and can switch to an alternate server when the current server is unavailable. The file system can become unavailable if one of the following occurs.

The failover, under these conditions, is normally transparent to the user. Thus, the failover can occur at any time without disrupting the processes that are running on the client.

Failover requires that the file system be mounted read-only. The file systems must be identical for the failover to occur successfully. See What Is a Replicated File System? for a description of what makes a file system identical. A static file system or a file system that is not changed often is the best candidate for failover.

You cannot use CacheFS and client-side failover on the same NFS mount. Extra information is stored for each CacheFS file system. This information cannot be updated during failover, so only one of these two features can be used when mounting a file system.

The number of replicas that need to be established for every file system depends on many factors. Ideally, you should have a minimum of two servers. Each server should support multiple subnets. This setup is better than having a unique server on each subnet. The process requires that each listed server be checked. Therefore, if more servers are listed, each mount is slower.

Failover Terminology

To fully comprehend the process, you need to understand two terms.

What Is a Replicated File System?

For the purposes of failover, a file system can be called a replica when each file is the same size and has the same file type as the original file system. Permissions, creation dates, and other file attributes are not considered. If the file size or file types are different, the remap fails and the process hangs until the old server becomes available. In NFS version 4, the behavior is different. See Client-Side Failover in NFS Version 4.

You can maintain a replicated file system by using rdist, cpio, or another file transfer mechanism. Because updating the replicated file systems causes inconsistency, for best results consider these precautions:

Failover and NFS Locking

Some software packages require read locks on files. To prevent these products from breaking, read locks on read-only file systems are allowed but are visible to the client side only. The locks persist through a remap because the server does not “know” about the locks. Because the files should not change, you do not need to lock the file on the server side.

Client-Side Failover in NFS Version 4

In NFS version 4, if a replica cannot be established because the file sizes are different or the file types are not the same, then the following happens.


Note –

If you restart the application and try again to access the file, you should be successful.


In NFS version 4, you no longer receive replication errors for directories of different sizes. In prior versions of NFS, this condition was treated as an error and would impede the remapping process.

Furthermore, in NFS version 4, if a directory read operation is unsuccessful, the operation is performed by the next listed server. In previous versions of NFS, unsuccessful read operations would cause the remap to fail and the process to hang until the original server was available.

Large Files

Starting with the Solaris 2.6 release, the Solaris OS supports files that are over 2 Gbytes. By default, UFS file systems are mounted with the -largefiles option to support the new capability. Previous releases cannot handle files of this size. See How to Disable Large Files on an NFS Server for instructions.

If the server's file system is mounted with the -largefiles option, a Solaris 2.6 NFS client can access large files without the need for changes. However, not all Solaris 2.6 commands can handle these large files. See largefile(5) for a list of the commands that can handle the large files. Clients that cannot support the NFS version 3 protocol with the large file extensions cannot access any large files. Although clients that run the Solaris 2.5 release can use the NFS version 3 protocol, large file support was not included in that release.

How NFS Server Logging Works

NFS server logging provides records of NFS reads and writes, as well as operations that modify the file system. This data can be used to track access to information. In addition, the records can provide a quantitative way to measure interest in the information.

When a file system with logging enabled is accessed, the kernel writes raw data into a buffer file. This data includes a timestamp, the client IP address, the UID of the requester, the file handle of the file or directory object that is being accessed, and the type of operation that occurred.

The nfslogd daemon converts this raw data into ASCII records that are stored in log files. During the conversion, the IP addresses are modified to host names and the UIDs are modified to logins if the name service that is enabled can find matches. The file handles are also converted into path names. To accomplish the conversion, the daemon tracks the file handles and stores information in a separate file handle-to-path table. That way, the path does not have to be identified again each time a file handle is accessed. Because no changes to the mappings are made in the file handle-to-path table if nfslogd is turned off, you must keep the daemon running.


Note –

Server logging is not supported in NFS version 4.


How the WebNFS Service Works

The WebNFS service makes files in a directory available to clients by using a public file handle. A file handle is an address, generated by the kernel, that identifies a file for NFS clients. The public file handle has a predefined value, so the server does not need to generate a file handle for the client. The ability to use this predefined file handle reduces network traffic by eliminating the MOUNT protocol. This ability should also accelerate processes for the clients.

By default, the public file handle on an NFS server is established on the root file system. This default provides WebNFS access to any clients that already have mount privileges on the server. You can change the public file handle to point to any file system by using the share command.
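For example, a command such as the following moves the public file handle to the /export/ftp file system, which is the file system that is used in the URL example later in this section:


# share -F nfs -o ro,public /export/ftp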

When the client has the file handle for the file system, a LOOKUP is run to determine the file handle for the file to be accessed. The NFS protocol allows the evaluation of only one path name component at a time. Each additional level of directory hierarchy requires another LOOKUP. A WebNFS server can evaluate an entire path name with a single multi-component lookup transaction when the LOOKUP is relative to the public file handle. Multi-component lookup enables the WebNFS server to deliver the file handle to the desired file without exchanging the file handles for each directory level in the path name.

In addition, an NFS client can initiate concurrent downloads over a single TCP connection. This connection provides quick access without the additional load on the server that is caused by setting up multiple connections. Although web browser applications support concurrent downloading of multiple files, each file has its own connection. By using one connection, the WebNFS software reduces the overhead on the server.

If the final component in the path name is a symbolic link to another file system, the client can access the file if the client already has access through normal NFS activities.

Normally, an NFS URL is evaluated relative to the public file handle. The evaluation can be changed to be relative to the server's root file system by adding an additional slash to the beginning of the path. In this example, these two NFS URLs are equivalent if the public file handle has been established on the /export/ftp file system.


nfs://server/junk
nfs://server//export/ftp/junk

Note –

The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.


How WebNFS Security Negotiation Works

The Solaris 8 release includes a new protocol that enables a WebNFS client to negotiate a selected security mechanism with a WebNFS server. The new protocol uses security negotiation multi-component lookup, which is an extension to the multi-component lookup that was used in earlier versions of the WebNFS protocol.

The WebNFS client initiates the process by making a regular multi-component lookup request by using the public file handle. Because the client has no knowledge of how the path is protected by the server, the default security mechanism is used. If the default security mechanism is not sufficient, the server replies with an AUTH_TOOWEAK error. This reply indicates that the default mechanism is not valid and that the client needs to use a stronger mechanism.

When the client receives the AUTH_TOOWEAK error, the client sends a request to the server to determine which security mechanisms are required. If the request succeeds, the server responds with an array of security mechanisms that are required for the specified path. Depending on the size of the array of security mechanisms, the client might have to make more requests to obtain the complete array. If the server does not support WebNFS security negotiation, the request fails.

After a successful request, the WebNFS client selects the first security mechanism from the array that the client supports. The client then issues a regular multi-component lookup request by using the selected security mechanism to acquire the file handle. All subsequent NFS requests are made by using the selected security mechanism and the file handle.


Note –

The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.


WebNFS Limitations With Web Browser Use

Several functions that a web site that uses HTTP can provide are not supported by the WebNFS software. These differences stem from the fact that the NFS server only sends the file, so any special processing must be done on the client. If you need to have one web site configured for both WebNFS and HTTP access, consider the following issues:

Secure NFS System

The NFS environment is a powerful and convenient way to share file systems on a network of different computer architectures and operating systems. However, the same features that make sharing file systems through NFS operation convenient also pose some security problems. Historically, most NFS implementations have used UNIX (or AUTH_SYS) authentication, but stronger authentication methods such as AUTH_DH have also been available. When using UNIX authentication, an NFS server authenticates a file request by authenticating the computer that makes the request, but not the user. Therefore, a client user can run su and impersonate the owner of a file. If DH authentication is used, the NFS server authenticates the user, making this sort of impersonation much harder.

With root access and knowledge of network programming, anyone can introduce arbitrary data into the network and extract any data from the network. The most dangerous attacks are those that involve the introduction of data. An example is the impersonation of a user by generating the right packets or by recording “conversations” and replaying them later. These attacks affect data integrity. Attacks that involve passive eavesdropping, which is merely listening to network traffic without impersonating anybody, are not as dangerous, because data integrity is not compromised. Users can protect the privacy of sensitive information by encrypting data that is sent over the network.

A common approach to network security problems is to leave the solution to each application. A better approach is to implement a standard authentication system at a level that covers all applications.

The Solaris operating system includes an authentication system at the level of the remote procedure call (RPC), which is the mechanism on which the NFS operation is built. This system, known as Secure RPC, greatly improves the security of network environments and provides additional security to services such as the NFS system. When the NFS system uses the facilities that are provided by Secure RPC, it is known as a Secure NFS system.

Secure RPC

Secure RPC is fundamental to the Secure NFS system. The goal of Secure RPC is to build a system that is at minimum as secure as a time-sharing system. In a time-sharing system all users share a single computer. A time-sharing system authenticates a user through a login password. With Data Encryption Standard (DES) authentication, the same authentication process is completed. Users can log in on any remote computer just as users can log in on a local terminal. The users' login passwords are their assurance of network security. In a time-sharing environment, the system administrator has an ethical obligation not to change a password to impersonate someone. In Secure RPC, the network administrator is trusted not to alter entries in a database that stores public keys.

You need to be familiar with two terms to understand an RPC authentication system: credentials and verifiers. Using ID badges as an example, the credential is what identifies a person: a name, address, and birthday. The verifier is the photo that is attached to the badge. You can be sure that the badge has not been stolen by checking the photo on the badge against the person who is carrying the badge. In RPC, the client process sends both a credential and a verifier to the server with each RPC request. The server sends back only a verifier because the client already “knows” the server's credentials.

RPC's authentication is open-ended, which means that a variety of authentication systems can be plugged into it, such as UNIX, DH, and KERB.

When UNIX authentication is used by a network service, the credentials contain the client's host name, UID, GID, and group-access list. However, the verifier contains nothing. Because no verifier exists, a superuser could falsify appropriate credentials by using commands such as su. Another problem with UNIX authentication is that UNIX authentication assumes all computers on a network are UNIX computers. UNIX authentication breaks down when applied to other operating systems in a heterogeneous network.

To overcome the problems of UNIX authentication, Secure RPC uses DH authentication.

DH Authentication

DH authentication uses the Data Encryption Standard (DES) and Diffie-Hellman public-key cryptography to authenticate both users and computers in the network. DES is a standard encryption mechanism. Diffie-Hellman public-key cryptography is a cipher system that involves two keys: one public and one secret. The public keys and secret keys are stored in the namespace. NIS stores the keys in the public-key map. These maps contain the public key and secret key for all potential users. See the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) for more information about how to set up the maps.

The security of DH authentication is based on a sender's ability to encrypt the current time, which the receiver can then decrypt and check against its own clock. The timestamp is encrypted with DES. The requirements for this scheme to work are as follows:

  - The two agents must agree on the current time.

  - The sender and receiver must be using the same encryption key.

If a network runs a time-synchronization program, the time on the client and the server is synchronized automatically. If a time-synchronization program is not available, timestamps can be computed by using the server's time instead of the network time. The client asks the server for the time before starting the RPC session, then computes the time difference between its own clock and the server's clock. This difference is used to offset the client's clock when computing timestamps. If the client and server clocks become unsynchronized, the server begins to reject the client's requests, and the DH authentication system on the client resynchronizes with the server.
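If neither approach is in place and a client's clock has drifted badly, you can align it with a server's clock by hand. This is only a sketch using the standard Solaris rdate command; server is a placeholder for your server's host name, and the command must be run as superuser:


# rdate server

Running a time-synchronization program remains the better long-term solution, because the clocks then stay aligned without manual intervention.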

The client and server arrive at the same encryption key by generating a random conversation key, also known as the session key, and by using public-key cryptography to deduce a common key. The common key is a key that only the client and server are capable of deducing. The conversation key is used to encrypt and decrypt the client's timestamp. The common key is used to encrypt and decrypt the conversation key.

KERB Authentication

Kerberos is an authentication system that was developed at MIT. Kerberos offers a variety of encryption types, including DES. Kerberos support is no longer supplied as part of Secure RPC, but starting in the Solaris 9 release a server-side and client-side implementation is included. See Chapter 21, Introduction to the Kerberos Service, in System Administration Guide: Security Services for more information about the implementation of Kerberos authentication.

Using Secure RPC With NFS

Be aware of the following points if you plan to use Secure RPC:

Autofs Maps

Autofs uses three types of maps:

  - Master maps

  - Direct maps

  - Indirect maps

Master Autofs Map

The auto_master map associates a directory with a map. The map is a master list that specifies all the maps that autofs should check. The following example shows what an auto_master file could contain.


Example 6–3 Sample /etc/auto_master File


# Master map for automounter 
# 
+auto_master 
/net            -hosts           -nosuid,nobrowse 
/home           auto_home        -nobrowse 
/-              auto_direct     -ro  

This example shows the generic auto_master file with one addition for the auto_direct map. Each line in the master map /etc/auto_master has the following syntax:

mount-point map-name [ mount-options ]

mount-point

mount-point is the full (absolute) path name of a directory. If the directory does not exist, autofs creates the directory if possible. If the directory exists and is not empty, mounting on the directory hides its contents. In this situation, autofs issues a warning.

The notation /- as a mount point indicates that this particular map is a direct map. The notation also means that no particular mount point is associated with the map.

map-name

map-name is the map autofs uses to find directions to locations, or mount information. If the name is preceded by a slash (/), autofs interprets the name as a local file. Otherwise, autofs searches for the mount information by using the search that is specified in the name-service switch configuration file (/etc/nsswitch.conf). Special maps are also used for /net. See Mount Point /net for more information.

mount-options

mount-options is an optional, comma-separated list of options that apply to the mounting of the entries that are specified in map-name, unless the entries in map-name list other options. Options for each specific type of file system are listed in the mount man page for that file system. For example, see the mount_nfs(1M) man page for NFS-specific mount options. For NFS-specific mount points, the bg (background) and fg (foreground) options do not apply.

A line that begins with # is a comment. All the text that follows until the end of the line is ignored.

To split long lines into shorter ones, put a backslash (\) at the end of the line. The maximum number of characters of an entry is 1024.


Note –

If the same mount point is used in two entries, the first entry is used by the automount command. The second entry is ignored.
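For example, in the following master-map fragment (auto_home2 is a hypothetical map name), the automount command uses only the first /home line and ignores the second:


/home           auto_home        -nobrowse
/home           auto_home2       -nobrowse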


Mount Point /home

The mount point /home is the directory under which the entries that are listed in /etc/auto_home (an indirect map) are to be mounted.


Note –

Autofs runs on all computers and supports /net and /home (automounted home directories) by default. These defaults can be overridden by entries in the NIS auto.master map or NIS+ auto_master table, or by local editing of the /etc/auto_master file.


Mount Point /net

Autofs mounts under the directory /net all the entries in the special map -hosts. The map is a built-in map that uses only the hosts database. Suppose that the computer gumbo is in the hosts database and it exports any of its file systems. The following command changes the current directory to the root directory of the computer gumbo.


% cd /net/gumbo

Autofs can mount only the exported file systems of host gumbo, that is, those file systems on a server that are available to network users instead of those file systems on a local disk. Therefore, all the files and directories on gumbo might not be available through /net/gumbo.
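Because only exported file systems are reachable this way, it can help to check what a server currently shares before browsing /net. The dfshares command lists a server's shared resources; gumbo is the example host from the preceding text:


% dfshares gumbo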

With the /net method of access, the server name is in the path and is location dependent. If you want to move an exported file system from one server to another, the path might no longer work. Instead, you should set up an entry in a map specifically for the file system you want rather than use /net.


Note –

Autofs checks the server's export list only at mount time. After a server's file systems are mounted, autofs does not check with the server again until the server's file systems are automatically unmounted. Therefore, newly exported file systems are not “seen” until the file systems on the client are unmounted and then remounted.


Direct Autofs Maps

A direct map is an automount point. With a direct map, a direct association exists between a mount point on the client and a directory on the server. Direct maps have a full path name and indicate the relationship explicitly. The following is a typical /etc/auto_direct map:


/usr/local          -ro \
   /bin                   ivy:/export/local/sun4 \
   /share                 ivy:/export/local/share \
   /src                   ivy:/export/local/src
/usr/man            -ro   oak:/usr/man \
                          rose:/usr/man \
                          willow:/usr/man 
/usr/games          -ro   peach:/usr/games 
/usr/spool/news     -ro   pine:/usr/spool/news \
                          willow:/var/spool/news 

Lines in direct maps have the following syntax:

key [ mount-options ] location

key

key is the path name of the mount point in a direct map.

mount-options

mount-options is the options that you want to apply to this particular mount. These options are required only if the options differ from the map default. Options for each specific type of file system are listed in the mount man page for that file system. For example, see the mount_cachefs(1M) man page for CacheFS-specific mount options. For information about using CacheFS options with different versions of NFS, see Accessing NFS File Systems Using CacheFS.

location

location is the location of the file system. One or more file systems are specified as server:pathname for NFS file systems or :devicename for High Sierra file systems (HSFS).


Note –

The pathname should not include an automounted mount point. The pathname should be the actual absolute path to the file system. For instance, the location of a home directory should be listed as server:/export/home/username, not as server:/home/username.


As in the master map, a line that begins with # is a comment. All the text that follows until the end of the line is ignored. Put a backslash at the end of the line to split long lines into shorter ones.

Of all the maps, the entries in a direct map most closely resemble the corresponding entries in /etc/vfstab. An entry might appear in /etc/vfstab as follows:


dancer:/usr/local - /usr/local/tmp nfs - yes ro 

The equivalent entry appears in a direct map as follows:


/usr/local/tmp     -ro     dancer:/usr/local
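After the automounter has mounted such an entry, you can confirm the result in the mount table. The following is a hypothetical session for the entry above:


% mount -v | grep /usr/local/tmp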

Note –

No concatenation of options occurs between the automounter maps. Any options that are added to an automounter map override all options that are listed in maps that are searched earlier. For instance, options that are included in the auto_master map would be overridden by corresponding entries in any other map.


See How Autofs Selects the Nearest Read-Only Files for Clients (Multiple Locations) for other important features that are associated with this type of map.

Mount Point /-

In Example 6–3, the mount point /- tells autofs not to associate the entries in auto_direct with any specific mount point. Indirect maps use mount points that are defined in the auto_master file. Direct maps use mount points that are specified in the named map. Remember, in a direct map the key, or mount point, is a full path name.

An NIS or NIS+ auto_master file can have only one direct map entry because the mount point must be a unique value in the namespace. An auto_master file that is a local file can have any number of direct map entries if entries are not duplicated.

Indirect Autofs Maps

An indirect map uses a substitution value of a key to establish the association between a mount point on the client and a directory on the server. Indirect maps are useful for accessing specific file systems, such as home directories. The auto_home map is an example of an indirect map.

Lines in indirect maps have the following general syntax:

key [ mount-options ] location

key

key is a simple name without slashes in an indirect map.

mount-options

mount-options is the options that you want to apply to this particular mount. These options are required only if the options differ from the map default. Options for each specific type of file system are listed in the mount man page for that file system. For example, see the mount_nfs(1M) man page for NFS-specific mount options.

location

location is the location of the file system. One or more file systems are specified as server:pathname.


Note –

The pathname should not include an automounted mount point. The pathname should be the actual absolute path to the file system. For instance, the location of a directory should be listed as server:/usr/local, not as server:/net/server/usr/local.


As in the master map, a line that begins with # is a comment. All the text that follows until the end of the line is ignored. Put a backslash (\) at the end of the line to split long lines into shorter ones. Example 6–3 shows an auto_master map that contains the following entry:


/home      auto_home        -nobrowse    

auto_home is the name of the indirect map that contains the entries to be mounted under /home. A typical auto_home map might contain the following:


david                  willow:/export/home/david
rob                    cypress:/export/home/rob
gordon                 poplar:/export/home/gordon
rajan                  pine:/export/home/rajan
tammy                  apple:/export/home/tammy
jim                    ivy:/export/home/jim
linda    -rw,nosuid    peach:/export/home/linda

As an example, assume that the previous map is on host oak. Suppose that the user linda has an entry in the password database that specifies her home directory as /home/linda. Whenever linda logs in to computer oak, autofs mounts the directory /export/home/linda that resides on the computer peach. Her home directory is mounted read-write, nosuid.

Assume the following conditions occur: User linda's home directory is listed in the password database as /home/linda. Anybody, including Linda, has access to this path from any computer that is set up with the master map referring to the map in the previous example.

Under these conditions, user linda can run login or rlogin on any of these computers and have her home directory mounted in place for her.

Furthermore, now Linda can also type the following command:


% cd ~david

autofs mounts David's home directory for her (if all permissions allow).
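You can observe this behavior from a shell. In this hypothetical session, the first reference triggers the mount, which then appears in the output of the mount command:


% ls ~david
% mount | grep david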


Note –

No concatenation of options occurs between the automounter maps. Any options that are added to an automounter map override all options that are listed in maps that are searched earlier. For instance, options that are included in the auto_master map are overridden by corresponding entries in any other map.


On a network without a name service, you have to change all the relevant files (such as /etc/passwd) on all systems on the network to allow Linda access to her files. With NIS, make the changes on the NIS master server and propagate the relevant databases to the slave servers. On a network that is running NIS+, propagating the relevant databases to the slave servers is done automatically after the changes are made.

How Autofs Works

Autofs is a client-side service that automatically mounts the appropriate file system. The components that work together to accomplish automatic mounting are the following:

  - The automount command

  - The autofs file system

  - The automountd daemon

The automount service, svc:/system/filesystem/autofs, which is called at system startup time, reads the master map file auto_master to create the initial set of autofs mounts. These autofs mounts are not automatically mounted at startup time. These mounts are points under which file systems are mounted in the future. These points are also known as trigger nodes.

After the autofs mounts are set up, these mounts can trigger file systems to be mounted under them. For example, when autofs receives a request to access a file system that is not currently mounted, autofs calls automountd, which actually mounts the requested file system.

After initially mounting autofs mounts, the automount command is used to update autofs mounts as necessary. The command compares the list of mounts in the auto_master map with the list of mounted file systems in the mount table file /etc/mnttab (formerly /etc/mtab). automount then makes the appropriate changes. This process allows system administrators to change mount information within auto_master and have those changes used by the autofs processes without stopping and restarting the autofs daemon. After the file system is mounted, further access does not require any action from automountd until the file system is automatically unmounted.

Unlike mount, automount does not read the /etc/vfstab file (which is specific to each computer) for a list of file systems to mount. The automount command is controlled within a domain and on computers through the namespace or local files.
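For example, after you edit the auto_master map or one of the maps it references, you can apply the changes by rerunning the command. The -v option provides notification of the resulting mounts and unmounts:


# automount -v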

The following is a simplified overview of how autofs works.

The automount daemon automountd is started at boot time by the service svc:/system/filesystem/autofs. See Figure 6–3. This service also runs the automount command, which reads the master map and installs autofs mount points. See How Autofs Starts the Navigation Process (Master Map) for more information.

Figure 6–3 svc:/system/filesystem/autofs Service Starts automount


Autofs is a kernel file system that supports automatic mounting and unmounting.

    When a request is made to access a file system at an autofs mount point, the following occurs:

  1. Autofs intercepts the request.

  2. Autofs sends a message to the automountd for the requested file system to be mounted.

  3. automountd locates the file system information in a map, creates the trigger nodes, and performs the mount.

  4. Autofs allows the intercepted request to proceed.

  5. Autofs unmounts the file system after a period of inactivity.


Note –

Mounts that are managed through the autofs service should not be manually mounted or unmounted. Even if the operation is successful, the autofs service does not check that the object has been unmounted, resulting in possible inconsistencies. A reboot clears all the autofs mount points.


How Autofs Navigates Through the Network (Maps)

Autofs searches a series of maps to navigate through the network. Maps are files that contain information such as the password entries of all users on a network or the names of all host computers on a network. Effectively, the maps contain network-wide equivalents of UNIX administration files. Maps are available locally or through a network name service such as NIS or NIS+. You create maps to meet the needs of your environment by using the Solaris Management Console tools. See Modifying How Autofs Navigates the Network (Modifying Maps).

How Autofs Starts the Navigation Process (Master Map)

The automount command reads the master map at system startup. Each entry in the master map is a direct map name or an indirect map name, its path, and its mount options, as shown in Figure 6–4. The specific order of the entries is not important. automount compares entries in the master map with entries in the mount table to generate a current list.

Figure 6–4 Navigation Through the Master Map


Autofs Mount Process

What the autofs service does when a mount request is triggered depends on how the automounter maps are configured. The mount process is generally the same for all mounts. However, the final result changes with the mount point that is specified and the complexity of the maps. Starting with the Solaris 2.6 release, the mount process has also been changed to include the creation of the trigger nodes.

Simple Autofs Mount

To help explain the autofs mount process, assume that the following files are installed.


$ cat /etc/auto_master
# Master map for automounter
#
+auto_master
/net        -hosts        -nosuid,nobrowse
/home       auto_home     -nobrowse
/share      auto_share
$ cat /etc/auto_share
# share directory map for automounter
#
ws          gumbo:/export/share/ws

When the /share directory is accessed, the autofs service creates a trigger node for /share/ws, which is an entry in /etc/mnttab that resembles the following entry:


-hosts  /share/ws     autofs  nosuid,nobrowse,ignore,nest,dev=###

    When the /share/ws directory is accessed, the autofs service completes the process with these steps:

  1. Checks the availability of the server's mount service.

  2. Mounts the requested file system under /share. Now the /etc/mnttab file contains the following entries.


    -hosts  /share/ws     autofs  nosuid,nobrowse,ignore,nest,dev=###
    gumbo:/export/share/ws /share/ws   nfs   nosuid,dev=####    #####
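You can watch these entries appear on a live system. This hypothetical session lists both the trigger node and, once /share/ws has been accessed, the completed NFS mount:


% grep /share/ws /etc/mnttab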

Hierarchical Mounting

When multiple layers are defined in the automounter files, the mount process becomes more complex. Suppose that you expand the /etc/auto_share file from the previous example to contain the following:


# share directory map for automounter
#
ws       /       gumbo:/export/share/ws
         /usr    gumbo:/export/share/ws/usr

The mount process is basically the same as the previous example when the /share/ws mount point is accessed. In addition, a trigger node to the next level (/usr) is created in the /share/ws file system so that the next level can be mounted if it is accessed. In this example, /export/share/ws/usr must exist on the NFS server for the trigger node to be created.


Caution –

Do not use the -soft option when specifying hierarchical layers. Refer to Autofs Unmounting for an explanation of this limitation.


Autofs Unmounting

The unmounting that occurs after a certain amount of idle time is from the bottom up (reverse order of mounting). If one of the directories at a higher level in the hierarchy is busy, only file systems below that directory are unmounted. During the unmounting process, any trigger nodes are removed and then the file system is unmounted. If the file system is busy, the unmount fails and the trigger nodes are reinstalled.


Caution –

Do not use the -soft option when specifying hierarchical layers. If the -soft option is used, requests to reinstall the trigger nodes can time out. The failure to reinstall the trigger nodes leaves no access to the next level of mounts. The only way to clear this problem is to have the automounter unmount all of the components in the hierarchy. The automounter can complete the unmounting either by waiting for the file systems to be automatically unmounted or by rebooting the system.


How Autofs Selects the Nearest Read-Only Files for Clients (Multiple Locations)

The example direct map contains the following:


/usr/local          -ro \
   /bin                   ivy:/export/local/sun4 \
   /share                 ivy:/export/local/share \
   /src                   ivy:/export/local/src
/usr/man            -ro   oak:/usr/man \
                          rose:/usr/man \
                          willow:/usr/man
/usr/games          -ro   peach:/usr/games
/usr/spool/news     -ro   pine:/usr/spool/news \
                          willow:/var/spool/news 

The mount points /usr/man and /usr/spool/news list more than one location: three locations for the first mount point and two locations for the second. Any of the replicated locations can provide the same service to any user. This procedure is sensible only when you mount a file system that is read-only, as you must have some control over the locations of files that you write or modify. You want to avoid modifying files on one server on one occasion and, minutes later, modifying the “same” file on another server. The benefit is that the best available server is used automatically without any effort required by the user.

If the file systems are configured as replicas (see What Is a Replicated File System?), the clients have the advantage of using failover. Not only is the best server automatically determined, but if that server becomes unavailable, the client automatically uses the next-best server. Failover was first implemented in the Solaris 2.6 release.

An example of a good file system to configure as a replica is man pages. In a large network, more than one server can export the current set of man pages. Which server you mount the man pages from does not matter if the server is running and exporting its file systems. In the previous example, multiple mount locations are expressed as a list of mount locations in the map entry.


/usr/man -ro oak:/usr/man rose:/usr/man willow:/usr/man 

In this example, you can mount the man pages from the servers oak, rose, or willow. Which server is best depends on a number of factors, including the following:

  - The number of servers that support a particular NFS protocol version

  - The proximity of each server on the network

  - The weighting, if any, that is assigned to each server

During the sorting process, a count is taken of the number of servers that support each version of the NFS protocol. Whichever version of the protocol is supported on the most servers becomes the protocol that is used by default. This selection provides the client with the maximum number of servers to depend on.

After the largest subset of servers with the same version of the protocol is found, that server list is sorted by proximity. To determine proximity, the IPv4 addresses are inspected. The IPv4 addresses show which servers are in each subnet. Servers on a local subnet are given preference over servers on a remote subnet. Preference for the closest server reduces latency and network traffic.


Note –

Proximity cannot be determined for replicas that are using IPv6 addresses.


Figure 6–5 illustrates server proximity.

Figure 6–5 Server Proximity


If several servers that support the same protocol are on the local subnet, the time to connect to each server is determined and the fastest server is used. The sorting can also be influenced by using weighting (see Autofs and Weighting).

For example, if version 4 servers are more abundant, version 4 becomes the protocol that is used by default. However, now the sorting process is more complex. Here are some examples of how the sorting process works:


Note –

Weighting is also influenced by keyword values in the /etc/default/nfs file. Specifically, values for NFS_SERVER_VERSMIN, NFS_CLIENT_VERSMIN, NFS_SERVER_VERSMAX, and NFS_CLIENT_VERSMAX can make some versions be excluded from the sorting process. For more information about these keywords, see Keywords for the /etc/default/nfs File.


With failover, the sorting is checked at mount time when a server is selected. Multiple locations are useful in an environment where individual servers might not export their file systems temporarily.

Failover is particularly useful in a large network with many subnets. Autofs chooses the appropriate server and is able to confine NFS network traffic to a segment of the local network. If a server has multiple network interfaces, you can list the host name that is associated with each network interface as if the interface were a separate server. Autofs selects the nearest interface to the client.


Note –

No weighting and no proximity checks are performed with manual mounts. The mount command prioritizes the servers that are listed from left to right.


For more information, see the automount(1M) man page.
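For comparison, a manual mount can list the same replicas explicitly; as the preceding note explains, the servers are then simply prioritized from left to right. The following is a sketch using the man-page servers from the earlier example:


# mount -F nfs -o ro oak:/usr/man,rose:/usr/man,willow:/usr/man /usr/man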

Autofs and Weighting

You can influence the selection of servers at the same proximity level by adding a weighting value to the autofs map. For example:


/usr/man -ro oak,rose(1),willow(2):/usr/man

The numbers in parentheses indicate a weighting. Servers without a weighting have a value of zero and, therefore, are most likely to be selected. The higher the weighting value, the lower the chance that the server is selected.


Note –

All other server selection factors are more important than weighting. Weighting is only considered when selecting between servers with the same network proximity.


Variables in a Map Entry

You can create a client-specific variable by prefixing a dollar sign ($) to its name. The variable helps you to accommodate different architecture types that are accessing the same file-system location. You can also use curly braces to delimit the name of the variable from appended letters or digits. Table 6–2 shows the predefined map variables.

Table 6–2 Predefined Map Variables

Variable    Meaning                                              Derived From    Example
ARCH        Architecture type                                    uname -m        sun4
CPU         Processor type                                       uname -p        sparc
HOST        Host name                                            uname -n        dinky
OSNAME      Operating system name                                uname -s        SunOS
OSREL       Operating system release                             uname -r        5.8
OSVERS      Operating system version (version of the release)    uname -v        GENERIC

You can use variables anywhere in an entry line except as a key. For instance, suppose that you have a file server that exports binaries for SPARC and x86 architectures from /usr/local/bin/sparc and /usr/local/bin/x86 respectively. The clients can mount through a map entry such as the following:


/usr/local/bin      -ro     server:/usr/local/bin/$CPU

Now the same entry for all clients applies to all architectures.
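If the variable name must be followed immediately by other letters or digits, delimit the name with curly braces. In the following sketch, the mount point, server name, and export paths are hypothetical; the entry resolves to a sparcbin or i386bin directory, depending on the output of uname -p on the client:


/opt/bin            -ro     server:/export/${CPU}bin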


Note –

Most applications that are written for any of the sun4 architectures can run on all sun4 platforms. The ARCH variable is hard-coded to sun4.


Maps That Refer to Other Maps

A map entry +mapname that is used in a file map causes automount to read the specified map as if it were included in the current file. If mapname is not preceded by a slash, autofs treats the map name as a string of characters and uses the name-service switch policy to find the map name. If the path name is an absolute path name, automount checks a local map of that name. If the map name starts with a dash (-), automount consults the appropriate built-in map, such as hosts.

This name-service switch file contains an entry for autofs that is labeled as automount, which contains the order in which the name services are searched. The following file is an example of a name-service switch file.


#
# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf;
# it uses NIS (YP) in conjunction with files.
#
# "hosts:" and "services:" in this file are used only if the /etc/netconfig
# file contains "switch.so" as a nametoaddr library for "inet" transports.
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:         files nis
group:          files nis

# consult /etc "files" only if nis is down.
hosts:          nis [NOTFOUND=return] files
networks:       nis [NOTFOUND=return] files
protocols:      nis [NOTFOUND=return] files
rpc:            nis [NOTFOUND=return] files
ethers:         nis [NOTFOUND=return] files
netmasks:       nis [NOTFOUND=return] files
bootparams:     nis [NOTFOUND=return] files
publickey:      nis [NOTFOUND=return] files
netgroup:       nis
automount:      files nis
aliases:        files nis
# for efficient getservbyname() avoid nis
services:       files nis 

In this example, the local maps are searched before the NIS maps. Therefore, you can have a few entries in your local /etc/auto_home map for the most commonly accessed home directories. You can then use the switch to fall back to the NIS map for other entries.


bill               cs.csc.edu:/export/home/bill
bonny              cs.csc.edu:/export/home/bonny

After consulting the included map, if no match is found, automount continues scanning the current map. Therefore, you can add more entries after a + entry.


bill               cs.csc.edu:/export/home/bill
bonny              cs.csc.edu:/export/home/bonny
+auto_home 

The map that is included can be a local file or a built-in map. Remember, only local files can contain + entries.


+auto_home_finance      # NIS+ map
+auto_home_sales        # NIS+ map
+auto_home_engineering  # NIS+ map
+/etc/auto_mystuff      # local map
+auto_home              # NIS+ map
+-hosts                 # built-in hosts map 

Note –

You cannot use + entries in NIS+ or NIS maps.


Executable Autofs Maps

You can create an autofs map that executes some commands to generate the autofs mount points. You could benefit from using an executable autofs map if you need to be able to create the autofs structure from a database or a flat file. The disadvantage to using an executable map is that the map needs to be installed on each host. An executable map cannot be included in either the NIS or the NIS+ name service.

The executable map must have an entry in the auto_master file.


/execute    auto_execute

Here is an example of an executable map:


#!/bin/ksh
#
# executable map for autofs
#

case $1 in
    src)  echo '-nosuid,hard bee:/export1' ;;
esac

For this example to work, the file must be installed as /etc/auto_execute and must have the executable bit set. Set permissions to 744. Under these circumstances, running the following command causes the /export1 file system from bee to be mounted:


% ls /execute/src
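To install the map from this example, you might copy the script into place and set the required permissions, as in the following sketch:


# cp auto_execute /etc/auto_execute
# chmod 744 /etc/auto_execute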

Modifying How Autofs Navigates the Network (Modifying Maps)

You can modify, delete, or add entries to maps to meet the needs of your environment. As applications and other file systems that users require change their location, the maps must reflect those changes. You can modify autofs maps at any time. Whether your modifications are effective the next time automountd mounts a file system depends on which map you modify and what kind of modification you make.

Default Autofs Behavior With Name Services

At boot time, autofs is invoked by the service svc:/system/filesystem/autofs, and autofs checks for the master auto_master map. Autofs is subject to the following rules.

Autofs uses the name service that is specified in the automount entry of the /etc/nsswitch.conf file. If NIS+ is specified, as opposed to local files or NIS, all map names are used as is. If NIS is selected and autofs cannot find a map that autofs needs, but finds a map name that contains one or more underscores, the underscores are changed to dots. This change allows the old NIS file names to work. Then autofs checks the map again, as shown in Figure 6–6.

Figure 6–6 How Autofs Uses the Name Service


The screen activity for this session would resemble the following example.


$ grep /home /etc/auto_master
/home           auto_home

$ ypmatch brent auto_home
Can't match key brent in map auto_home.  Reason: no such map in
server's domain.

$ ypmatch brent auto.home
diskus:/export/home/diskus1/&

If “files” is selected as the name service, all maps are assumed to be local files in the /etc directory. Autofs interprets a map name that begins with a slash (/) as local regardless of which name service autofs uses.

Autofs Reference

The remaining sections of this chapter describe more advanced autofs features and topics.

Autofs and Metacharacters

Autofs recognizes some characters as having a special meaning. Some characters are used for substitutions, and some characters are used to protect other characters from the autofs map parser.

Ampersand (&)

If you have a map with many subdirectories specified, as in the following, consider using string substitutions.


john        willow:/home/john
mary        willow:/home/mary
joe         willow:/home/joe
able        pine:/export/able
baker       peach:/export/baker

You can use the ampersand character (&) to substitute the key wherever the key appears. If you use the ampersand, the previous map changes to the following:


john        willow:/home/&
mary        willow:/home/&
joe         willow:/home/&
able        pine:/export/&
baker       peach:/export/&

You could also use key substitutions in a direct map, in situations such as the following:


/usr/man        willow,cedar,poplar:/usr/man

You can also simplify the entry further as follows:


/usr/man        willow,cedar,poplar:&

Notice that the ampersand substitution uses the whole key string. Therefore, if the key in a direct map starts with a / (as it should), the slash is included in the substitution. Consequently, for example, you could not do the following:


/progs          &1,&2,&3:/export/src/progs

The reason is that autofs would interpret the example as the following:


/progs          /progs1,/progs2,/progs3:/export/src/progs

Asterisk (*)

You can use the universal substitute character, the asterisk (*), to match any key. You could mount the /export file system from all hosts through this map entry.


*               &:/export

Each ampersand is substituted by the value of any given key. For example, if this map were mounted at the hypothetical mount point /shared, a reference to /shared/bee would cause autofs to mount bee:/export. Autofs interprets the asterisk as an end-of-file character, so any entries that follow an asterisk entry in the map are never read.

Autofs and Special Characters

If you have a map entry that contains special characters, you might have to mount directories that have names that confuse the autofs map parser. The autofs parser is sensitive to names that contain colons, commas, and spaces, for example. These names should be enclosed in double-quotes, as in the following:


/vms    -ro    vmsserver: -  -  - "rc0:dk1 - "
/mac    -ro    gator:/ - "Mr Disk - "