mount_nfs - mount remote NFS resources
mount [-F nfs] [generic_options] [-o specific_options] [-O] resource
mount [-F nfs] [generic_options] [-o specific_options] [-O] mount_point
mount [-F nfs] [generic_options] [-o specific_options] [-O] resource mount_point
The mount utility attaches a named resource to the file system hierarchy at the pathname location mount_point, which must already exist. If mount_point has any contents prior to the mount operation, the contents remain hidden until the resource is once again unmounted.
If the resource is listed in the /etc/vfstab file, the command line can specify either resource or mount_point, and mount consults /etc/vfstab for more information. If the -F option is omitted, mount takes the file system type from /etc/vfstab.
If the resource is not listed in the /etc/vfstab file, then the command line must specify both the resource and the mount_point.
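As a sketch of the abbreviated forms described above (the server name, export path, and mount point here are hypothetical), a vfstab entry for an NFS resource and the corresponding shortened mount invocations might look like this:

```shell
# Hypothetical /etc/vfstab entry (fields: device to mount, device to fsck,
# mount point, FS type, fsck pass, mount at boot, mount options):
#
#   serv:/export/src  -  /usr/src  nfs  -  yes  rw,hard
#
# With this entry in place, either abbreviated form is sufficient:
#   mount /usr/src
#   mount serv:/export/src
```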
host:pathname
Where host is the name of the NFS server host, and pathname is the path name of the directory on the server being mounted. The path name is interpreted according to the server's path name parsing rules and is not necessarily slash-separated, though on most servers, this is the case.
host can be an IPv4 or IPv6 address string. As IPv6 addresses already contain colons, enclose host in a pair of square brackets when specifying an IPv6 address string; otherwise the first occurrence of a colon can be interpreted as the separator between the host name and the path, for example, [1080::8:800:200C:417A]:tmp/file. See inet(4P) and inet6(4P).
nfs://host[:port]/pathname
This is an NFS URL and follows the standard convention for NFS URLs as described in NFS URL Scheme, RFC 2224. See the discussion of URLs and the public option under NFS FILE SYSTEMS for a more detailed discussion.
host:pathname,nfs://host[:port]/pathname
A comma-separated list of host:pathname entries and/or NFS URLs.
See the discussion of replicated file systems and failover under NFS FILE SYSTEMS for a more detailed discussion.
hostlist pathname
hostlist is a comma-separated list of hosts.
See the discussion of replicated file systems and failover under NFS FILE SYSTEMS for a more detailed discussion.
The mount command maintains a table of mounted file systems in /etc/mnttab, described in mnttab(5).
mount_nfs supports both NFSv3 and NFSv4 mounts. The default NFS version is NFSv4.
The NFS client service is managed by the Service Management Facility (SMF) under the service identifier svc:/network/nfs/client.
See the smf(7) man page for more information about SMF. Use the svcs(1) command to query the status of the service. Administrative actions such as enabling, disabling, or restarting the service can be performed by using the svcadm(8) command.
-o specific_options
Set file-system-specific options in a comma-separated list with no intervening spaces.
acdirmax=n
Hold cached attributes for no more than n seconds after directory update. The default value is 60.
acdirmin=n
Hold cached attributes for at least n seconds after directory update. The default value is 30.
acregmax=n
Hold cached attributes for no more than n seconds after file modification. The default value is 60.
acregmin=n
Hold cached attributes for at least n seconds after file modification. The default value is 3.
actimeo=n
Set minimum and maximum times for regular files and directories to n seconds. See “File Attributes,” below, for a description of the effect of setting this option to 0.
See “Specifying Values for Attribute Cache Duration Options,” below, for a description of how acdirmax, acdirmin, acregmax, acregmin, and actimeo are parsed on a mount command line.
bg | fg
If the first attempt fails, retry in the background (bg) or in the foreground (fg). The default is fg.
forcedirectio | noforcedirectio
If forcedirectio is specified, forced direct I/O is used for the duration of the mount: data is transferred directly between client and server, with no buffering on the client. If the file system is mounted using noforcedirectio, data is buffered on the client. forcedirectio is a performance option that is of benefit only in large sequential data transfers. The default behavior is noforcedirectio.
grpid
By default, the GID associated with a newly created file obeys the System V semantics; that is, the GID is set to the effective GID of the calling process. This behavior can be overridden on a per-directory basis by setting the set-GID bit of the parent directory; in this case, the GID of a newly created file is set to the GID of the parent directory (see open(2) and mkdir(2)). Files created on file systems that are mounted with the grpid option obey BSD semantics independent of whether the set-GID bit of the parent directory is set; that is, the GID is unconditionally inherited from the parent directory.
hard | soft
Continue to retry requests until the server responds (hard), or give up and return an error (soft). The default value is hard. Note that NFSv4 clients do not support soft mounts; if the soft option is specified for an NFSv4 mount, it is silently ignored.
intr | nointr
Allow (do not allow) keyboard interrupts to kill a process that is hung while waiting for a response on a hard-mounted file system. The default is intr, which makes it possible for clients to interrupt applications that may be waiting for a remote mount.
llock
Use local locking (no lock manager). Note that this is a private interface.
noac
Suppress data and attribute caching. The data caching that is suppressed is the write-behind; the local page cache is still maintained, but data copied into it is immediately written to the server.
nocto
Do not perform the normal close-to-open consistency checks. Normally, when a file is closed, all modified data associated with the file is flushed to the server and not held on the client, and when a file is opened the client sends a request to the server to validate the client's local caches. This behavior ensures a file's consistency across multiple NFS clients. When nocto is in effect, the client performs neither the flush on close nor the request for validation, allowing the possibility of differences among copies of the same file as stored on multiple clients.
This option can be used where it can be guaranteed that a specified file system is accessed from only one client. Under such a condition, the effect of nocto can be a slight performance gain.
nommaplockcheck
Bypass the memory mapping/locking check. Normally, the client checks for combinations of mmap(2) and fcntl(2) calls that could lead to file corruption. The nommaplockcheck option disables those checks. It should be used only when it can be guaranteed that mapped-file I/O, which involves whole pages whether or not the entire page is locked, will not conflict with byte-range locks held by other clients.
port=n
The server IP port number. The default is NFS_PORT. If the port option is specified, and if the resource includes one or more NFS URLs, and if any of the URLs include a port number, then the port number in the option and in the URL must be the same.
proto=netid | rdma
By default, the transport protocol that the NFS mount uses is the first available RDMA transport supported by both the client and the server. If no RDMA transport is found, the client attempts to use a TCP transport or, failing that, a UDP transport, as ordered in the /etc/netconfig file. If it does not find a connection-oriented transport, it uses the first available connectionless transport. Use this option to override the default behavior.
proto is set to the value of netid or rdma. netid is the value of the network_id field entry in the /etc/netconfig file.
The UDP protocol is not supported for NFS version 4. If you specify a UDP protocol with the proto option, NFS version 4 is not used.
The RDMA transport is only supported in global zones and kernel zones. It is not supported within non-global zones.
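As a hedged illustration of overriding the transport selection (the server and path names here are hypothetical), proto can be combined with the vers option to pin both transport and protocol version:

```shell
# Force the TCP transport for the mount (hypothetical server and paths):
#   mount -o proto=tcp serv:/export/src /usr/src
#
# Force TCP and NFS version 3 together:
#   mount -o proto=tcp,vers=3 serv:/export/src /usr/src
```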
public
The public option forces the use of the public file handle when connecting to the NFS server. The resource specified need not have an NFS URL. See the discussion of URLs and the public option under NFS FILE SYSTEMS for a more detailed discussion.
quota | noquota
Enable or prevent quota(8) from checking whether the user is over quota on this file system; if the file system has quotas enabled on the server, quotas are still checked for operations on this file system.
remount
Remounts a read-only file system as read-write (using the rw option). This option cannot be used with other -o options, and it works only on currently mounted read-only file systems.
retrans=n
Set the number of NFS retransmissions to n. The default value is 5. For connection-oriented transports, this option has no effect because it is assumed that the transport performs retransmissions on behalf of NFS.
retry=n
The number of times to retry the mount operation. The default for the mount command is 10000. The default for the automounter is 0, in other words, do not retry. You might find it useful to increase this value on heavily loaded servers, where automounter traffic is dropped, causing unnecessary "server not responding" errors.
rsize=n
Set the read buffer size to a maximum of n bytes. The default value is 1048576 when using connection-oriented transports with version 3 or version 4 of the NFS protocol, and 32768 when using connectionless transports. The default can be negotiated down if the server prefers a smaller transfer size. Read operations may not necessarily use the maximum buffer size. When using version 2, the default value is 32768 for all transports. If a value lower than a system-defined minimum (currently 1024) is specified, it is replaced by that minimum; however, the server can still negotiate a transfer size smaller than the minimum.
sec=mode
Set the security mode for NFS transactions. If sec= is not specified, the default action is to use AUTH_SYS over NFS version 2 mounts, to use a user-configured default mode over NFS version 3 mounts, or to negotiate a mode over version 4 mounts.
The preferred mode for NFS version 3 mounts is the default mode specified in /etc/nfssec.conf (see nfssec.conf(5)) on the client. If there is no default configured in this file or if the server does not export using the client's default mode, then the client picks the first mode that it supports in the array of modes returned by the server. These alternatives are limited to the security flavors listed in /etc/nfssec.conf.
NFS version 4 mounts negotiate a security mode when the server returns an array of security modes. The client attempts the mount with each security mode, in order, until one is successful.
Only one mode can be specified with the sec= option. See nfssec(7) for the available mode options.
timeo=n
Set the NFS timeout to n tenths of a second. This value is primarily useful for connectionless transports, where manual tuning may be useful to improve performance. The default value is 11 tenths of a second for connectionless transports.
This value has no effect when using an RDMA transport.
For connection-oriented transports, the default value is 600 tenths of a second. There is usually no need to change this value because the underlying transport will manage its own retransmissions. One exception is replicated file systems, where a smaller timeout can improve failover performance.
vers=version
Specifies the version of the NFS protocol to use for mounting. The valid versions for this option are 2, 3, 4, 4.0, and 4.1.
If you do not specify this option, the version used between the client and the server is the highest version available on both systems. If the NFS server does not support the client's default maximum, the next lower version is tried until a matching version is found.
The default maximum version for a client is 4, which can result in either 4.0 or 4.1 mounts depending on the server. This value can be changed by setting the client_versmax property. For more information, see the sharectl(8) man page.
wsize=n
Set the write buffer size to a maximum of n bytes. The default value is 1048576 when using connection-oriented transports with version 3 or version 4 of the NFS protocol, and 32768 when using connectionless transports. The default can be negotiated down if the server prefers a smaller transfer size. Write operations may not necessarily use the maximum buffer size. When using version 2, the default value is 32768 for all transports. If a value lower than a system-defined minimum (currently 1024) is specified, it is replaced by that minimum; however, the server can still negotiate a transfer size smaller than the minimum.
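As a hedged illustration (the server and path names are hypothetical), the two transfer-size options are typically lowered together, for example over a slow or lossy link:

```shell
# Request 32 KB read and write transfer sizes (hypothetical server/path);
# the server can negotiate these down further:
#   mount -o rsize=32768,wsize=32768 serv:/export/src /usr/src
```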
xattr | noxattr
Allow or disallow the creation and manipulation of extended attributes. The default is xattr. See fsattr(7) for a description of extended attributes.
-O
Overlay mount. Allows the file system to be mounted over an existing mount point, making the underlying file system inaccessible. If a mount is attempted on a pre-existing mount point without setting this flag, the mount fails, producing the error “device busy.”
idmap
By default, this option is not used during the mount. If the idmap mount option is not used, AUTH_SYS authentication is based on the equality between the client-supplied UID/GID in the RPC credential and the UID/GID stored on the NFS server. In effect, this disables the nfsmapid functionality, which can make migration from legacy NFSv2/v3 systems to NFSv4 easier. NFS clients automatically detect servers that do not support numeric-string UIDs and GIDs and automatically fall back to the user@domain format.
You can turn off this behavior, that is, turn off numeric-string UID and GID support, by using the idmap mount option.
File systems that are mounted read-write or that contain executable files should always be mounted with the hard option. Applications using soft-mounted file systems can incur unexpected I/O errors, file corruption, and unexpected program core dumps. The soft option is not recommended.
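Following that recommendation, a read-write mount would typically combine hard with intr, so that requests are retried indefinitely but blocked processes remain killable from the keyboard (the server and path names here are hypothetical):

```shell
# Recommended style for read-write NFS mounts: retry forever (hard),
# but allow keyboard interrupts to kill blocked processes (intr).
#   mount -o hard,intr serv:/export/home /home
```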
The server can require authenticated NFS requests from the client. See nfssec(7).
If the public option is specified, or if the resource includes an NFS URL, mount attempts to connect to the server using the public file handle lookup protocol. See WebNFS Client Specification, RFC 2054. If the server supports the public file handle, the attempt is successful; mount does not need to contact the server's rpcbind(8) daemon to get the port number of the mount server, nor contact the mountd(8) daemon to get the initial file handle of pathname. If the NFS client and server are separated by a firewall that allows all outbound connections through specific ports, such as NFS_PORT, then NFS operations are enabled through the firewall. The public option and the NFS URL can be specified independently or together. They interact as follows:
public option with an NFS URL: force the use of the public file handle; the mount fails if the server does not support it.
public option with host:pathname: force the use of the public file handle; the mount fails if the server does not support it.
NFS URL without the public option: attempt the public file handle first; fall back to the MOUNT protocol if the server does not support it.
host:pathname without the public option: use the MOUNT protocol only.
A Native path is a path name that is interpreted according to conventions used on the native operating system of the NFS server. A Canonical path is a path name that is interpreted according to the URL rules. See Uniform Resource Locators (URL), RFC 1738. See “Examples,” below, for uses of Native and Canonical paths.
resource can list multiple read-only file systems to be used to provide data. These file systems should contain equivalent directory structures and identical files. The file systems can be specified either with a comma-separated list of host:/pathname entries and/or NFS URL entries, or with a comma-separated list of hosts, if all file system names are the same. If multiple file systems are named and the first server in the list is down, failover uses the next alternate server to access files. If the read-only option is not chosen, replication is disabled. File access, for NFS versions 2 and 3, is blocked on the original if NFS locks are active for that file.
To improve NFS read performance, files and file attributes are cached. File modification times get updated whenever a write occurs. However, file access times can be temporarily out-of-date until the cache gets refreshed.
The attribute cache retains file attributes on the client. Attributes for a file are assigned a time to be flushed. If the file is modified before the flush time, then the flush time is extended by the time since the last modification (under the assumption that files that changed recently are likely to change soon). There is a minimum and maximum flush time extension for regular files and for directories. Setting actimeo=n sets flush time to n seconds for both regular files and directories.
Setting actimeo=0 disables attribute caching on the client. This means that every reference to attributes is satisfied directly from the server though file data is still cached. While this guarantees that the client always has the latest file attributes from the server, it has an adverse effect on performance through additional latency, network load, and server load.
Setting the noac option also disables attribute caching, but has the further effect of disabling client write caching. While this guarantees that data written by an application is written directly to a server, where it can be viewed immediately by other clients, it has a significant adverse effect on client write performance. Data written into memory-mapped file pages (mmap(2)) is not written directly to the server.
The attribute cache duration options are acdirmax, acdirmin, acregmax, acregmin, and actimeo, as described under OPTIONS. A value specified for actimeo sets the values of all attribute cache duration options except for any of these options specified following actimeo on a mount command line. For example, consider the following command:
example# mount -o acdirmax=10,actimeo=1000 server:/path /localpath
Because actimeo is the last duration option in the command line, its value (1000) becomes the setting for all of the duration options, including acdirmax. Now consider:
example# mount -o actimeo=1000,acdirmax=10 server:/path /localpath
Because the acdirmax option follows actimeo on the command line, it is assigned the value specified (10). The remaining duration options are set to the value of actimeo (1000).
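The left-to-right precedence described above can be sketched as a small shell function. This is only an illustration of the documented parsing rule, not the mount implementation; the function name parse_durations is invented for this sketch, and the starting values are the documented option defaults.

```shell
# Sketch of how attribute-cache duration options resolve left to right:
# actimeo resets all four values, and any option appearing later on the
# command line overrides it.
parse_durations() {
    acdirmax=60 acdirmin=30 acregmax=60 acregmin=3   # documented defaults
    IFS=,
    for opt in $1; do
        case $opt in
            actimeo=*)  v=${opt#*=}
                        acdirmax=$v acdirmin=$v acregmax=$v acregmin=$v ;;
            acdirmax=*) acdirmax=${opt#*=} ;;
            acdirmin=*) acdirmin=${opt#*=} ;;
            acregmax=*) acregmax=${opt#*=} ;;
            acregmin=*) acregmin=${opt#*=} ;;
        esac
    done
    unset IFS
    echo "acdirmax=$acdirmax acdirmin=$acdirmin acregmax=$acregmax acregmin=$acregmin"
}

parse_durations "acdirmax=10,actimeo=1000"
# prints: acdirmax=1000 acdirmin=1000 acregmax=1000 acregmin=1000
parse_durations "actimeo=1000,acdirmax=10"
# prints: acdirmax=10 acdirmin=1000 acregmax=1000 acregmin=1000
```

The first call matches the first example above (actimeo last wins everywhere); the second matches the second example (acdirmax, appearing after actimeo, keeps its own value).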
Example 1 Mounting an NFS File System
To mount an NFS file system:
# mount serv:/usr/src /usr/src
This is an example of the use of a native path.

Example 2 Mounting An NFS File System Read-Only With No suid Privileges
To mount an NFS file system read-only with no suid privileges:
# mount -r -o nosuid serv:/usr/src /usr/src

Example 3 Mounting An NFS File System Over Version 2, with the UDP Transport
To mount an NFS file system over version 2, with the UDP transport:
# mount -o vers=2,proto=udp serv:/usr/src /usr/src

Example 4 Mounting an NFS File System Using An NFS URL
To mount an NFS file system using an NFS URL (a canonical path):
# mount nfs://serv/usr/man /usr/man

Example 5 Mounting An NFS File System Forcing Use Of The Public File Handle
To mount an NFS file system and force the use of the public file handle and an NFS URL (a canonical path) that has a non 7-bit ASCII escape sequence:
# mount -o public nfs://serv/usr/%A0abc /mnt/test

Example 6 Mounting an NFS File System Using a Native Path
To mount an NFS file system using a native path (where the server uses colons (:) as the component separator) and the public file handle:
# mount -o public serv:C:doc:new /usr/doc

Example 7 Mounting a Replicated Set of NFS File Systems with the Same Pathnames
To mount a replicated set of NFS file systems with the same pathnames:
# mount serv-a,serv-b,serv-c:/usr/man /usr/man

Example 8 Mounting a Replicated Set of NFS File Systems with Different Pathnames
To mount a replicated set of NFS file systems with different pathnames:
# mount serv-x:/usr/man,serv-y:/var/man,nfs://serv-z/man /usr/man
/etc/mnttab
Table of mounted file systems

/etc/dfs/fstypes
Default distributed file system type

/etc/vfstab
Table of automatically mounted resources
See attributes(7) for descriptions of the following attributes:
mkdir(2), mmap(2), mount(2), open(2), umount(2), lofs(4FS), inet(4P), inet6(4P), mnttab(5), nfssec.conf(5), attributes(7), fsattr(7), nfssec(7), standards(7), lockd(8), mount(8), mountall(8), mountd(8), nfsd(8), quota(8), sharectl(8), statd(8)
Callaghan, Brent, WebNFS Client Specification, RFC 2054, October 1996.
Callaghan, Brent, NFS URL Scheme, RFC 2224, October 1997.
Berners-Lee, Masinter & McCahill, Uniform Resource Locators (URL), RFC 1738, December 1994.
An NFS server should not attempt to use NFS to mount the file systems it serves, unless they are provided by zfs(4FS). For an alternative to NFS mounts of file systems from the same host, see the lofs(4FS) man page.
If the directory on which a file system is to be mounted is a symbolic link, the file system is mounted on the directory to which the symbolic link refers, rather than being mounted on top of the symbolic link itself.
SunOS 4.x used the biod maintenance procedure to perform parallel read-ahead and write-behind on NFS clients. SunOS 5.x made biod obsolete with multi-threaded processing, which transparently performs parallel read-ahead and write-behind.
Since the root (/) file system is mounted read-only by the kernel during the boot process, only the remount option (and options that can be used in conjunction with remount) affect the root (/) entry in the /etc/vfstab file.