Managing Network File Systems in Oracle® Solaris 11.3

Updated: September 2018

Features of the NFS Service

This section describes important features of the NFS service.

NFS Version 2 Protocol

NFS Version 2, the first version of the NFS protocol, is widely used. All Oracle Solaris releases support the NFS Version 2 protocol.

NFS Version 3 Protocol

Unlike the NFS Version 2 protocol, the NFS Version 3 protocol can handle files that are larger than 2 GB. For information about handling large files in NFS, see NFS Large File Support.

The NFS Version 3 protocol enables safe asynchronous writes on the server, which improves performance by allowing the server to cache client write requests in memory. The client no longer waits for the server to commit the changes to disk, so the response time is faster. Also, the server can batch the requests, which improves the response time on the server.

Many Oracle Solaris NFS Version 3 operations return the file attributes, which are stored in the local cache. Because the cache is updated more often, the requirement to perform a separate operation to update this data arises less often. Therefore, the number of Remote Procedure Calls (RPC) to the server is reduced, improving performance.

The process for verifying file access permissions has been improved. Version 2 generated a "write error" message or a "read error" message if users tried to copy a remote file without the appropriate permissions. In Version 3, the permissions are checked before the file is opened, so the error is reported as an "open error".

The NFS Version 3 protocol removes the 8 KB transfer size limit. Clients and servers can negotiate whatever transfer size they both support, rather than conforming to the 8 KB limit that Version 2 imposed. Note that in earlier Oracle Solaris implementations, the protocol defaulted to a 32 KB transfer size. Starting with the Oracle Solaris 10 release, restrictions on wire transfer sizes were relaxed, and the transfer size is based on the capabilities of the underlying transport.
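Although the transfer size is negotiated automatically, it can still be capped per mount. The following sketch uses the rsize and wsize mount options; the server name host and the path /export/data are placeholders for your environment.

```shell
# Cap read and write transfer sizes at 32 KB for one NFS Version 3 mount.
# "host" and /export/data are hypothetical names; adjust as needed.
mount -F nfs -o vers=3,rsize=32768,wsize=32768 host:/export/data /mnt
```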

NFS Version 4 Protocol

The NFS Version 4 protocol represents the user ID and the group ID as strings. The nfsmapid daemon is used by the NFS Version 4 client and server for the following mappings:

  • Map the user ID and group ID strings to local numeric IDs

  • Map the local numeric IDs to user ID and group ID strings

For more information about the nfsmapid daemon, see NFS Daemons.

Note that in NFS Version 4, the nfsmapid daemon is used to map user IDs or group IDs in Access Control List (ACL) entries on a server to user IDs or group IDs in ACL entries on a client, and vice versa. For more information about user ID and group ID mapping, see ACLs and nfsmapid in NFS Version 4 and NFS ACL Support.
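For the string-to-ID mapping to work, the client and server must agree on the NFS Version 4 domain that nfsmapid uses. As a sketch, the domain can be inspected and set through sharectl; the domain name example.com below is a placeholder.

```shell
# Check the NFS Version 4 ID-mapping domain used by nfsmapid.
sharectl get -p nfsmapid_domain nfs

# Set it explicitly; client and server must use the same domain
# for user@domain strings to map to local numeric IDs.
# "example.com" is a placeholder value.
sharectl set -p nfsmapid_domain=example.com nfs
```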

With NFS Version 4, when you unshare a file system, all the state information for any open files or file locks in that file system is destroyed. In NFS Version 3, the server maintains any locks that the clients had obtained before the file system was unshared. For more information about unsharing a file system in NFS Version 4, see Unsharing and Resharing a File System in NFS Version 4.

NFS Version 4 servers use a pseudo file system to provide clients with access to exported objects on the server. For more information about the pseudo file system, see File System Namespace in NFS Version 4. NFS Version 4 also supports volatile file handles. For more information, see Volatile File Handles in NFS Version 4.

Delegation, a technique by which the server delegates the management of a file to a client, is supported on both the client and the server. For example, the server could grant either a read delegation or a write delegation to a client. For more information about delegation, see Delegation in NFS Version 4.

NFS Version 4 does not support LIPKEY/SPKM security.

Also, NFS Version 4 does not use the following daemons:

  • lockd

  • nfslogd

  • statd

For a complete list of the features in NFS Version 4, see Features in NFS Version 4.

For information about setting up the NFS services, see Setting Up the NFS Service.

Controlling NFS Versions

The SMF repository includes parameters to control the NFS protocols that are used by both the client and the server. For example, you can use parameters to manage version negotiation. For more information about the client and server parameters, see NFS Daemons. For more information about the parameter values for NFS daemons, see the nfs(4) man page.
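As a sketch of how these parameters are managed, the sharectl command reads and writes the NFS properties in the SMF repository. The version values shown below are examples only.

```shell
# Show all NFS parameters, including the version-negotiation settings.
sharectl get nfs

# Limit the server to NFS Version 3 and below (example value).
sharectl set -p server_versmax=3 nfs

# Require the client to negotiate at least NFS Version 4 (example value).
sharectl set -p client_versmin=4 nfs
```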

NFS ACL Support

An access control list (ACL) provides a mechanism for setting file access permissions that is finer grained than the standard UNIX file permissions. NFS ACL support provides a method of changing and viewing ACL entries from an Oracle Solaris NFS client on an Oracle Solaris NFS server.

The NFS Version 2 and Version 3 implementations support the old POSIX-draft style ACLs. POSIX-draft ACLs are natively supported by UFS. For more information about POSIX-draft ACLs, see Using Access Control Lists to Protect UFS Files in Securing Files and Verifying File Integrity in Oracle Solaris 11.3.

The NFS Version 4 protocol supports NFS Version 4 style ACLs. NFS Version 4 ACLs are natively supported by Oracle Solaris ZFS. You must use ZFS as the underlying file system on the NFS Version 4 server for full featured NFS Version 4 ACL functionality. NFS Version 4 ACLs have a rich set of inheritance properties, as well as a set of permission bits beyond the standard read, write, and execute. For more information about using ACLs to protect ZFS files, see Setting ACLs on ZFS Files in Securing Files and Verifying File Integrity in Oracle Solaris 11.3. For more information about support for ACLs in NFS Version 4, see ACLs and nfsmapid in NFS Version 4.
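As a brief sketch, NFS Version 4 style ACLs on ZFS-backed files can be viewed and modified with the Solaris ls and chmod commands. The user name alice and the file path below are hypothetical.

```shell
# View the NFS Version 4 style ACL on a ZFS-backed file.
ls -v /export/data/report.txt

# Grant a hypothetical user "alice" read access through an ACL entry.
chmod A+user:alice:read_data:allow /export/data/report.txt
```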

NFS Over TCP

The default transport protocol for the NFS protocol is TCP (Transmission Control Protocol). TCP helps performance on slow networks and wide area networks. TCP also provides congestion control and error recovery. NFS over TCP works with the NFS Version 2, NFS Version 3, and NFS Version 4 protocols.
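Although TCP is chosen by default, the transport can also be requested explicitly at mount time. In this sketch, the server name host and the path are placeholders.

```shell
# Force a mount to use TCP, even when other transports are available.
# "host" and /export/data are placeholder names.
mount -F nfs -o proto=tcp host:/export/data /mnt
```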


Note -  If InfiniBand hardware is available on the system, the default transport protocol changes from TCP to the Remote Direct Memory Access (RDMA) protocol. For more information, see Overview of NFS Over RDMA and NFS Over RDMA. Note that if you use the –proto=tcp mount option, NFS mounts are forced to use TCP only.

NFS Over UDP

Starting with the Oracle Solaris 11 release, the NFS client uses only one UDP (User Datagram Protocol) reserved port, which is configurable. The system can be configured to use more than one port to increase system performance. This capability mirrors NFS over TCP support, which has been configurable in this way since its inception. For more information about tuning the NFS environment, see Oracle Solaris 11.3 Tunable Parameters Reference Manual.

Overview of NFS Over RDMA

If InfiniBand hardware is available on the system, the default transport protocol changes from TCP to the RDMA protocol. The RDMA protocol is a technology for memory-to-memory transfer of data over high-speed networks. Specifically, RDMA provides remote data transfer directly to and from memory without CPU intervention. To provide this capability, RDMA combines the interconnect I/O technology of InfiniBand with the Oracle Solaris OS. However, if you use the –proto=tcp mount option, NFS mounts are forced to use TCP only. For more information about using the RDMA protocol for NFS, see NFS Over RDMA.
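The transport choice can also be made explicit per mount. The following sketch requests RDMA directly; the server name and path are placeholders, and the mount fails if no RDMA-capable path to the server exists.

```shell
# Request the RDMA transport explicitly for one mount.
# "host" and /export/data are placeholder names.
mount -F nfs -o proto=rdma host:/export/data /mnt
```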

Network Lock Manager and NFS

The Network Lock Manager provides UNIX record locking for any files being shared over NFS. The locking mechanism enables clients to synchronize their I/O requests with other clients, ensuring data integrity.


Note -  The Network Lock Manager is used only for NFS Version 2 and NFS Version 3 mounts. File locking is built into the NFS Version 4 protocol.

NFS Large File Support

The NFS Version 3 protocol can handle files that are larger than 2 GB, but the NFS Version 2 protocol cannot.

NFS Client Failover

Dynamic failover of read-only file systems provides a high level of availability for read-only resources that are already replicated, such as man pages, other documentation, and shared binaries. Failover can occur any time after the file system is mounted. Manual mounts can now list multiple replicas, much like the automounter in previous releases. The automounter has not changed, except that failover no longer waits until the file system is remounted. For more information, see How to Use Client-Side Failover and Client-Side Failover.
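A manual mount with a replica list can be sketched as follows; the three server names are placeholders, and all replicas must export the same read-only data.

```shell
# Mount a read-only replica set; the client fails over among the
# listed servers if one becomes unavailable. Server names are placeholders.
mount -F nfs -o ro serv-a,serv-b,serv-c:/usr/man /usr/man
```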

Kerberos Support for the NFS Service

The NFS service supports Kerberos Version 5 authentication, integrity, and privacy when you configure NFS clients and servers to support Kerberos. You can use the mount and share command-line options when you use Kerberos for secure authentication. For information about Kerberos Version 5 authentication, see Configuring Kerberos NFS Servers in Managing Kerberos and Other Authentication Services in Oracle Solaris 11.3.
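Assuming Kerberos is already configured on both systems, the security flavor can be selected with the sec option on the share and mount commands. The share path below is a placeholder.

```shell
# On the server: share with Kerberos authentication (krb5),
# integrity (krb5i), and privacy (krb5p) as allowed flavors.
share -F nfs -o sec=krb5:krb5i:krb5p /export/secure

# On the client: require the privacy-protected flavor for this mount.
mount -F nfs -o sec=krb5p host:/export/secure /mnt
```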

WebNFS Support

WebNFS provides the capability to make a file system on the Internet accessible through firewalls. This capability uses an extension to the NFS protocol. One advantage of using the WebNFS protocol for Internet access is its reliability. The service is built as an extension of the NFS Version 3 and Version 2 protocols. Additionally, WebNFS enables you to share files without the administrative overhead of an anonymous FTP site. For more information about WebNFS, see Security Negotiation for the WebNFS Service and Administering WebNFS.


Note -  The NFS Version 4 protocol is preferred over the WebNFS service. NFS Version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.

RPCSEC_GSS Security Flavor

A security flavor called RPCSEC_GSS uses the standard GSS-API interfaces to provide authentication, integrity, and privacy, as well as enabling support of multiple security mechanisms. For more information about support of Kerberos V5 authentication, see Kerberos Support for the NFS Service. For more information about GSS-API, see Chapter 4, Writing Applications That Use GSS-API in Developer’s Guide to Oracle Solaris 11.3 Security.

Extensions for NFS Mounting

The NFS service provides extensions to the mount and automountd commands in Oracle Solaris. These extensions enable the mount request to use the public file handle instead of the MOUNT protocol; the public file handle is the same access method that the WebNFS service uses. By circumventing the MOUNT protocol, the mount can occur through a firewall. Because fewer transactions are needed between the server and the client, the mount also occurs faster.

The extensions also enable NFS URLs to be used instead of the standard path name. Also, you can use the –public option with the mount command and the automounter maps to force the use of the public file handle. For more information about the WebNFS service, see WebNFS Support.
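The two forms can be sketched as follows; the server name host and the exported path are placeholders.

```shell
# Mount by NFS URL, which uses the public file handle
# instead of the MOUNT protocol.
mount -F nfs nfs://host/export/data /mnt

# Force use of the public file handle with a conventional path.
mount -F nfs -o public host:/export/data /mnt
```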

Security Negotiation for the WebNFS Service

The NFS service enables a WebNFS client to negotiate a security mechanism with an NFS server. This negotiation protocol enables you to use secure transactions with the WebNFS service. For more information about security negotiation for WebNFS, see How WebNFS Security Negotiation Works.

NFS Server Logging


Note -  NFS Version 4 does not support the server logging feature.

NFS server logging enables an NFS server to provide a record of file operations that have been performed on its file systems. The record includes information about which file was accessed, when the file was accessed, and who accessed the file. You can specify the location of the logs that contain this information through a set of configuration options. You can also use these options to select the operations to be logged. The NFS server logging feature is particularly useful for sites that make anonymous FTP archives available to NFS and WebNFS clients. For more information, see How to Enable NFS Server Logging.
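Assuming a logging tag has been defined in /etc/nfs/nfslog.conf, a file system can be shared with logging enabled as in this sketch; the path and tag name are placeholders.

```shell
# Share a file system read-only with NFS server logging enabled,
# using the "global" tag defined in /etc/nfs/nfslog.conf.
share -F nfs -o ro,log=global /export/ftp
```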

Autofs Features

Autofs works with file systems that are specified in the local namespace. This information can be maintained in LDAP, NIS (Network Information Service), or local files. Autofs supports the following features:

• A fully multithreaded automountd daemon makes autofs more reliable. Multithreading enables concurrent servicing of multiple mounts, which prevents the service from hanging if a server is unavailable.

• The automountd daemon also provides on-demand mounting. Only the top file system is mounted. Other file systems that are related to this mount point are mounted when needed.

  • The autofs service supports the “browsability” of indirect maps. This support enables a user to see which directories could be mounted without having to actually mount each file system. A –nobrowse option ensures that large file systems, such as /net and /home, are not automatically browsable. Also, you can turn off autofs browsability on each client by using the –n option with the automount command. For more information about different methods to disable autofs browsability, see Disabling Autofs Browsability.
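Browsability can be disabled either client-wide or per map entry. The following sketch shows both forms; the master-map entry is illustrative.

```shell
# Disable browsing of all indirect autofs maps on this client.
automount -n

# Alternatively, disable browsing for a single entry in the
# master map (/etc/auto_master); this illustrative line applies
# -nobrowse only to /net:
#   /net    -hosts    -nobrowse
```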