This chapter provides information on how to perform such NFS administration tasks as setting up NFS services, adding new file systems to share, mounting file systems, using the Secure NFS system, and using the WebNFS functionality. The last part of the chapter includes troubleshooting procedures and a list of many of the NFS error messages and their meanings.
Your responsibilities as an NFS administrator depend on your site's requirements and the role of your computer on the network. You might be responsible for all the computers on your local network, in which case you might be responsible for determining these configuration items:
Which computers, if any, should be dedicated servers
Which computers should act as both servers and clients
Which computers should be clients only
Maintaining a server once it has been set up involves the following tasks:
Sharing and unsharing file systems as necessary
Modifying administrative files to update the lists of file systems your computer shares or mounts automatically
Checking the status of the network
Diagnosing and fixing NFS-related problems as they arise
Setting up maps for autofs
Remember, a computer can be both a server and a client--sharing local file systems with remote computers and mounting remote file systems.
Servers provide access to their file systems by sharing them over the NFS environment. You specify which file systems are to be shared with the share command or the /etc/dfs/dfstab file.
Entries in the /etc/dfs/dfstab file are shared automatically whenever you start NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis. For example, if your computer is a server that supports diskless clients, you need to make your clients' root directories available at all times. Most file system sharing should be done automatically; the only time that manual sharing should occur is during testing or troubleshooting.
The dfstab file lists all the file systems that your server shares with its clients and controls which clients can mount a file system. If you want to modify dfstab to add or delete a file system or to modify the way sharing is done, simply edit the file with any supported text editor (such as vi). The next time the computer enters run level 3, the system reads the updated dfstab to determine which file systems should be shared automatically.
Each line in the dfstab file consists of a share command--the same command you would type at the command-line prompt to share the file system. The share command is located in /usr/sbin.
Edit the /etc/dfs/dfstab file.
Add one entry to the file for each file system that you want to have shared automatically. Each entry must be on a line by itself in the file and uses this syntax:
share [-F nfs] [-o specific-options] [-d description] pathname
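For example, a dfstab entry that shares a hypothetical /export/home file system read-write with the eng netgroup might look like the following (the path and netgroup name are placeholders for your own values):

share -F nfs -o rw=eng -d "home directories" /export/home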
Check that the NFS service is running on the server.
If this is the first share command or set of share commands that you have initiated, it is likely that the NFS daemons are not running. The following commands kill the daemons and restart them.
# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start
This ensures that the NFS service is now running on the server and will restart automatically when the server is at run level 3 during boot.
At this point, set up your autofs maps so that clients can access the file systems you've shared on the server. See "Setting Up Autofs".
There are several ways to mount file systems. They can be mounted automatically when the system is booted, on demand from the command line, or through the automounter. The automounter provides many advantages over mounting at boot time or mounting from the command line, but many situations require a combination of all three.
If you want to mount file systems at boot time instead of using autofs maps, follow this procedure. Although you must follow this procedure for all local file systems, it is not recommended for remote file systems because it must be completed on every client.
Edit the /etc/vfstab file.
Entries in the /etc/vfstab file have the following syntax:
special fsckdev mountp fstype fsckpass mount-at-boot mntopts
You want a client computer to mount the /var/mail directory on the server wasp. You would like it mounted as /var/mail on the client. You want the client to have read-write access. Add the following entry to the client's vfstab file.
wasp:/var/mail - /var/mail nfs - yes rw
NFS servers should not have NFS vfstab entries because of a potential deadlock. The NFS service is started after the entries in /etc/vfstab are checked, so if two servers that mount file systems from each other fail at the same time, each system could hang as the systems reboot.
To manually mount a file system during normal operation, run the mount command as superuser:
# mount -F nfs -o ro bee:/export/share/local /mnt
In this case, the /export/share/local file system from the server bee is mounted read-only on /mnt on the local system. Mounting from the command line allows for temporary viewing of the file system. You can unmount the file system with umount or by rebooting the local host.
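For example, when you finish viewing the file system, unmount it as follows:

# umount /mnt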
In the Solaris 2.6 release and in any versions of the mount command that were patched after the 2.6 release, no warning is issued for invalid options; the command silently ignores any options that cannot be interpreted. The following procedure helps determine whether the options that were supplied either on the command line or through /etc/vfstab were valid.
For this example, assume that the following command has been run:
# mount -F nfs -o ro,vers=2 bee:/export/share/local /mnt
Run the nfsstat command to verify the options.
# nfsstat -m /mnt
/mnt from bee:/export/share/local
 Flags: vers=2,proto=tcp,sec=sys,hard,intr,dynamic,acl,rsize=8192,wsize=8192,retrans=5
Note that the file system from bee has been mounted with the protocol version set to 2. Unfortunately, the nfsstat command does not display information about all of the options, but it is the most accurate way to verify the options it does report.
Check the entry in /etc/mnttab.
The mount command does not allow invalid options to be added to the mount table, so verifying that the options listed in the file match those listed on the command line is a way to check those options not reported by the nfsstat command.
# grep bee /etc/mnttab
bee:/export/share/local /mnt nfs ro,vers=2,dev=2b0005e 859934818
Chapter 5, About Autofs includes the specific instructions for establishing and supporting mounts with the automounter. Without any changes to the generic system, clients should be able to access remote file systems through the /net mount point. To mount the /export/share/local file system from the previous example, all you would need to do is:
% cd /net/bee/export/share/local
Because the automounter allows all users to mount file systems, root access is not required. It also provides for automatic unmounting of file systems, so there is no need to unmount file systems after you are done.
This section discusses some of the tasks necessary to initialize or use NFS services.
To enable daemons without rebooting, become superuser and type the following command.
# /etc/init.d/nfs.server start
This starts the daemons if there is an entry in /etc/dfs/dfstab.
To disable daemons without rebooting, become superuser and type the following command.
# /etc/init.d/nfs.server stop
Check that no large files exist on the file system.
Here is an example of a command that you can run to locate large files:
# cd /export/home1
# find . -xdev -size +2000000 -exec ls -l {} \;
If there are large files on the file system, you must remove or move them to another file system.
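For example, to move a large file to another file system (assuming a hypothetical /export/home2 file system with sufficient space):

# mv /export/home1/large_file /export/home2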
Unmount the file system.
# umount /export/home1
Reset the file system state if the file system has been mounted using -largefiles.
fsck resets the file system state if no large files exist on the file system:
# fsck /export/home1
Mount the file system using -nolargefiles.
# mount -F ufs -o nolargefiles /export/home1
You can do this from the command line, but to make the option more permanent, add an entry like the following into /etc/vfstab:
/dev/dsk/c0t3d0s1 /dev/rdsk/c0t3d0s1 /export/home1 ufs 2 yes nolargefiles
Previous versions of the Solaris operating system cannot use large files. Check that clients of the NFS server are running at least the Solaris 2.6 release if the clients need to access large files.
On the NFS client, mount the file system using the -ro option.
You can do this from the command line, through the automounter, or by adding an entry to /etc/vfstab that looks like:
bee,wasp:/export/share/local - /usr/local nfs - no ro
This syntax has been allowed by the automounter in earlier releases, but in those releases failover applied only while a server was being selected at mount time, not after the file system was mounted.
Servers that are running different versions of the NFS protocol cannot be mixed on the command line or in a vfstab entry. Mixing servers that support the NFS version 2 and version 3 protocols can only be done with autofs, in which case the best subset of version 2 or version 3 servers is used.
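For reference, here is a sketch of the command-line equivalent of the vfstab entry above; both replicas are assumed to share identical copies of the file system:

# mount -F nfs -o ro bee,wasp:/export/share/local /usr/local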
Edit /etc/dfs/dfstab.
The first example allows mount access to all clients in the eng netgroup except the host named rose. The second example allows mount access to all clients in the eng.sun.com DNS domain except for rose.
share -F nfs -o ro=-rose:eng /export/share/man
share -F nfs -o ro=-rose:.eng.sun.com /export/share/man
For additional information on access lists, see "Setting Access Lists With the share Command".
Run the shareall command.
The NFS server will not use changes to /etc/dfs/dfstab until the file systems are shared again or until the server is rebooted.
# shareall
To use the Secure NFS system, all the computers you are responsible for must have a domain name. A domain is an administrative entity, typically consisting of several computers, that joins a larger network. If you are running NIS+, you should also establish the NIS+ name service for the domain. See Solaris Naming Setup and Configuration Guide.
You can configure the Secure NFS environment to use either Diffie-Hellman or Kerberos Version 4 authentication or a combination of the two. The System Administration Guide discusses these authentication services.
Assign your domain a domain name, and make the domain name known to each computer in the domain.
See the Solaris Naming Administration Guide if you are using NIS+ as your name service.
Establish public keys and secret keys for your clients' users using the newkey or nisaddcred command, and have each user establish his or her own secure RPC password using the chkey command.
For information about these commands, see the newkey(1M), the nisaddcred(1M), and the chkey(1) man pages.
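For example, a sketch of creating a key for a hypothetical user jdoe (run as root on the name service master) and then having that user set a secure RPC password afterward; the user name is a placeholder:

# newkey -u jdoe
% chkey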
When public and secret keys have been generated, the public and encrypted secret keys are stored in the publickey database.
Verify that the name service is responding. If you are running NIS+, type the following:
# nisping -u
Last updates for directory eng.acme.com. :
Master server is eng-master.acme.com.
        Last update occurred at Mon Jun 5 11:16:10 1995
Replica server is eng1-replica-58.acme.com.
        Last Update seen was Mon Jun 5 11:16:10 1995
If you are running NIS, verify that the ypbind daemon is running.
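For example, you can check for the daemon with ps:

% ps -ef | grep ypbind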
To verify that the keyserv daemon (the keyserver) is running, type the following:
# ps -ef | grep keyserv
root    100      1  16   Apr 11 ?      0:00 /usr/sbin/keyserv
root   2215   2211   5 09:57:28 pts/0  0:00 grep keyserv
If the daemon isn't running, to start the keyserver, type the following:
# /usr/sbin/keyserv
Run keylogin to decrypt and store the secret key.
Usually, the login password is identical to the network password. In this case, keylogin is not required. If the passwords are different, the users have to log in, and then do a keylogin. You still need to use the keylogin -r command as root to store the decrypted secret key in /etc/.rootkey.
You only need to run keylogin -r if the root secret key changes or /etc/.rootkey is lost.
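For example, a user whose login and network passwords differ would type the following after logging in:

% keylogin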
Edit the /etc/dfs/dfstab file and add the -sec=dh option to the appropriate entries (for Diffie-Hellman authentication).
share -F nfs -o sec=dh /export/home
Edit the auto_master data to include -sec=dh as a mount option in the appropriate entries (for Diffie-Hellman authentication):
/home auto_home -nosuid,sec=dh
With Solaris 2.5 and earlier releases, if a client does not mount as secure a file system that is shared as secure, users have access as user nobody, rather than as themselves. With version 2 on the Solaris 2.6 release, the NFS server refuses access if the security modes do not match, unless -sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so there is no need for the clients to specify -sec=krb4 or -sec=dh. The users have access to the files as themselves.
When you reinstall, move, or upgrade a computer, remember to save /etc/.rootkey if you don't establish new keys or change them for root. If you do delete /etc/.rootkey, you can always type:
# keylogin -r
Edit the /etc/dfs/dfstab file and add the -sec=krb4 option to the appropriate entries.
share -F nfs -o sec=krb4 /export/home
Edit the auto_master data to include -sec=krb4 as a mount option.
/home auto_home -nosuid,sec=krb4
With Solaris 2.5 and earlier releases, if a client does not mount as secure a file system that is shared as secure, users have access as user nobody, rather than as themselves. With version 2 on the Solaris 2.6 release, the NFS server refuses access if the security modes do not match, unless -sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so there is no need for the clients to specify -sec=krb4 or -sec=dh. The users have access to the files as themselves.
This section provides instructions for administering the WebNFS system. The following tasks are discussed.
To use the WebNFS functionality, you first need an application capable of running and loading an NFS URL (for example, nfs://server/path). The next step is to choose the file system that will be exported for WebNFS access. If the application is web browsing, often the document root for the web server is used. Several factors need to be considered when choosing a file system to export for WebNFS access.
Each server has one public file handle that by default is associated with the server's root file system. The path in an NFS URL is evaluated relative to the directory with which the public file handle is associated. If the path leads to a file or directory within an exported file system, then the server provides access. You can use the -public option of the share command to associate the public file handle with a specific exported directory. Using this option allows URLs to be relative to the shared file system rather than to the server's root file system. By default, the public file handle points to the root file system, but this file handle does not allow web access unless the root file system is shared.
The WebNFS environment allows users who already have mount privileges to access files through a browser regardless of whether the file system is exported using the -public option. Because users already have access to these files through the NFS setup, this should not create any additional security risk. You only need to share a file system using the -public option if users who cannot mount the file system need to be able to use WebNFS access.
File systems that are already open to the public make good candidates for using the -public option, such as the top directory in an ftp archive or the main URL directory for a web site.
You can use the -index option with the share command to force the loading of an HTML file instead of listing the directory when an NFS URL is accessed.
After a file system is chosen, review the files and set access permissions to restrict viewing of files or directories as needed. Establish the permissions as appropriate for any NFS file system that is being shared. For many sites, 755 permissions for directories and 644 permissions for files provide the correct level of access.
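For example, to establish these permissions on the hypothetical /export/ftp directory and an HTML file within it:

# chmod 755 /export/ftp
# chmod 644 /export/ftp/index.html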
Additional factors need to be considered if both NFS and HTTP URLs are going to be used to access one Web site. These are described in "WebNFS Limitations With Web Browser Use".
By default in the 2.6 release, all file systems that are available for NFS mounting are automatically available for WebNFS access. This procedure needs to be followed only if the server does not already allow NFS mounting, if resetting the public file handle is useful to shorten NFS URLs, or if the -index option is required.
Edit the /etc/dfs/dfstab file.
Add one entry to the file for the file system that you want to have shared automatically. The -index tag is optional.
share -F nfs -o ro,public,index=index.html /export/ftp
Check that the NFS service is running on the server.
If this is the first share command or set of share commands that you have initiated, it is likely that the NFS daemons are not running. The following commands kill and restart the daemons.
# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start
Share the file system.
Once the entry is in /etc/dfs/dfstab, the file system can be shared either by rebooting the system or by using the shareall command. If the NFS daemons were restarted in step 2, this command does not need to be run because the script runs the command.
# shareall
Verify that the information is correct.
Run the share command to check that the correct options are listed:
# share
-        /export/share/man   ro   ""
-        /usr/src            rw=eng   ""
-        /export/ftp         ro,public,index=index.html   ""
Browsers capable of supporting WebNFS access should provide access using an NFS URL that looks something like:
nfs://server<:port>/path
server is the name of the file server, port is the port number to use (the default value is 2049), and path is the path to the file. The path can be relative either to the public file handle or to the root file system on the server.
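For example, assuming that the server bee associates the public file handle with /export/ftp (as in the earlier share example), this hypothetical URL loads index.html through a path that is evaluated relative to the public file handle:

nfs://bee/index.html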
In most browsers, the URL service type (for example, nfs or http) is remembered from one transaction to the next, unless a URL that includes a different service type is loaded. When you are using NFS URLs, if a reference to an HTTP URL is loaded, subsequent pages are loaded using the HTTP protocol instead of the NFS protocol, unless the URLs again specify an NFS URL.
You can enable WebNFS access for clients that are not part of the local subnet by configuring the firewall to allow a TCP connection on port 2049. Simply allowing access for httpd does not allow NFS URLs to be used.
When tracking down an NFS problem, keep in mind that there are three main points of possible failure: the server, the client, and the network. The strategy outlined in this section tries to isolate each individual component to find the one that is not working. In all cases, the mountd and nfsd daemons must be running on the server for remote mounts to succeed.
The mountd and nfsd daemons start automatically at boot time only if there are NFS share entries in the /etc/dfs/dfstab file. Therefore, mountd and nfsd must be started manually when setting up sharing for the first time.
The -intr option is set by default for all mounts. If a program hangs with a "server not responding" message, you can kill it with the keyboard interrupt Control-c.
When the network or server has problems, programs that access hard-mounted remote files fail differently than those that access soft-mounted remote files. Hard-mounted remote file systems cause the client's kernel to retry the requests until the server responds again. Soft-mounted remote file systems cause the client's system calls to return an error after trying for a while. Because these errors can result in unexpected application errors and data corruption, avoid soft mounting.
When a file system is hard mounted, a program that tries to access it hangs if the server fails to respond. In this case, the NFS system displays the following message on the console:
NFS server hostname not responding still trying
When the server finally responds, the following message appears on the console:
NFS server hostname ok
A program accessing a soft-mounted file system whose server is not responding generates the following message:
NFS operation failed for server hostname: error # (error_message)
Because of possible errors, do not soft-mount file systems with read-write data or file systems from which executables are run. Writable data could be corrupted if the application ignores the errors. Mounted executables might not load properly and can fail.
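For example, a hard mount that can still be interrupted from the keyboard would use a command like the following (hard and intr are the defaults, shown here explicitly):

# mount -F nfs -o hard,intr bee:/export/share/local /mnt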
To determine where the NFS service has failed, you need to follow several procedures to isolate the failure. Check for the following items:
Can the client reach the server?
Can the client contact the NFS services on the server?
Are the NFS services running on the server?
In the process of checking these items, it might become apparent that other portions of the network are not functioning, such as the name service or the physical network hardware. The Solaris Naming Administration Guide contains debugging procedures for the NIS+ name service. Also, during the process it might become obvious that the problem isn't at the client end (for instance, if you get at least one trouble call from every subnet in your work area). In this case, it is much more efficient to assume that the problem is the server or the network hardware near the server, and to start the debugging process at the server, not at the client.
Check that the NFS server is reachable from the client. On the client, type the following command.
% /usr/sbin/ping bee
bee is alive
If the command reports that the server is alive, remotely check the NFS server (see "How to Remotely Check the NFS Server").
If the server is not reachable from the client, make sure that the local name service is running. For NIS+ clients type the following:
% /usr/lib/nis/nisping -u
Last updates for directory eng.acme.com. :
Master server is eng-master.acme.com.
        Last update occurred at Mon Jun 5 11:16:10 1995
Replica server is eng1-replica-58.acme.com.
        Last Update seen was Mon Jun 5 11:16:10 1995
If the name service is running, make sure that the client has received the correct host information by typing the following:
% /usr/bin/getent hosts bee
129.144.83.117    bee.eng.acme.com
If the host information is correct, but the server is not reachable from the client, run the ping command from another client.
If the command run from a second client fails, see "How to Verify the NFS Service on the Server".
If the server is reachable from the second client, use ping to check connectivity of the first client to other systems on the local net.
If this fails, check the networking software configuration on the client (/etc/netmasks, /etc/nsswitch.conf, and so forth).
If the software is correct, check the networking hardware.
Try moving the client onto a second net drop.
Check that the NFS services have started on the NFS server by typing the following command:
% rpcinfo -s bee|egrep 'nfs|mountd'
 100003  3,2    tcp,udp                          nfs     superuser
 100005  3,2,1  ticots,ticotsord,tcp,ticlts,udp  mountd  superuser
If the daemons have not been started, see "How to Restart NFS Services".
Check that the server's nfsd processes are responding. On the client, type the following command.
% /usr/bin/rpcinfo -u bee nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
If the server is running, it prints a list of program and version numbers. Using the -t option tests the TCP connection. If this fails, skip to "How to Verify the NFS Service on the Server".
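For example, to test the TCP connection to the same service:

% /usr/bin/rpcinfo -t bee nfs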
Check that the server's mountd is responding, by typing the following command.
% /usr/bin/rpcinfo -u bee mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
Using the -t option tests the TCP connection. If either attempt fails, skip to "How to Verify the NFS Service on the Server".
Check the local autofs service if it is being used:
% cd /net/wasp
Choose a /net or /home mount point that you know should work properly. If this doesn't work, then as root on the client, type the following to restart the autofs service:
# /etc/init.d/autofs stop
# /etc/init.d/autofs start
Verify that the file system is shared as expected on the server.
% /usr/sbin/showmount -e bee
/usr/src               eng
/export/share/man      (everyone)
Check the entry on the server and the local mount entry for errors. Also check the name space. In this instance, if the first client is not in the eng netgroup, then that client would not be able to mount the /usr/src file system.
Check all entries that include mounting information in all of the local files. The list includes /etc/vfstab and all the /etc/auto_* files.
Log on to the server as root.
Check that the server can reach the clients.
# ping lilac
lilac is alive
If the client is not reachable from the server, make sure that the local name service is running. For NIS+ clients type the following:
% /usr/lib/nis/nisping -u
Last updates for directory eng.acme.com. :
Master server is eng-master.acme.com.
        Last update occurred at Mon Jun 5 11:16:10 1995
Replica server is eng1-replica-58.acme.com.
        Last Update seen was Mon Jun 5 11:16:10 1995
If the name service is running, check the networking software configuration on the server (/etc/netmasks, /etc/nsswitch.conf, and so forth).
Type the following command to check whether the nfsd daemon is running.
# rpcinfo -u localhost nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
# ps -ef | grep nfsd
root    232      1  0   Apr 07 ?      0:01 /usr/lib/nfs/nfsd -a 16
root   3127   2462  1 09:32:57 pts/3  0:00 grep nfsd
Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service (see "How to Restart NFS Services").
Type the following command to check whether the mountd daemon is running.
# /usr/bin/rpcinfo -u localhost mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
# ps -ef | grep mountd
root    145      1  0   Apr 07 ?      21:57 /usr/lib/autofs/automountd
root    234      1  0   Apr 07 ?      0:04 /usr/lib/nfs/mountd
root   3084   2462  1 09:30:20 pts/3  0:00 grep mountd
Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service (see "How to Restart NFS Services").
Type the following command to check whether the rpcbind daemon is running.
# /usr/bin/rpcinfo -u localhost rpcbind
program 100000 version 1 ready and waiting
program 100000 version 2 ready and waiting
program 100000 version 3 ready and waiting
If rpcbind seems to be hung, either reboot the server or follow the steps in "How to Warm-Start rpcbind".
# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start
This stops the daemons and restarts them, if there is an entry in /etc/dfs/dfstab.
If the NFS server cannot be rebooted because of work in progress, it is possible to restart rpcbind without having to restart all of the services that use RPC, by completing a warm start as described in this procedure.
As root on the server, get the PID for rpcbind.
Run ps to get the PID (which is the value in the second column).
# ps -ef |grep rpcbind
root    115      1  0   May 31 ?      0:14 /usr/sbin/rpcbind
root  13000   6944  0 11:11:15 pts/3  0:00 grep rpcbind
Send a SIGTERM signal to the rpcbind process.
In this example, term is the signal that is to be sent and 115 is the PID for the program (see the kill(1) man page). This causes rpcbind to create a list of the current registered services in /tmp/portmap.file and /tmp/rpcbind.file.
# kill -s term 115
If you do not kill the rpcbind process with the -s term option, then you cannot complete a warm start of rpcbind and will have to reboot the server to restore service.
Restart rpcbind.
Do a warm restart of the command so that the files created by the kill command are consulted, and the process resumes without requiring that all of the RPC services be restarted (see the rpcbind(1M) man page).
# /usr/sbin/rpcbind -w
Run the nfsstat command with the -m option to gather current NFS information.
The name of the current server is printed after "currserver=".
% nfsstat -m
/usr/local from bee,wasp:/export/share/local
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,acl,rsize=32768,wsize=32768,retrans=5
 Failover: noresponse=0, failover=0, remap=0, currserver=bee
Bad argument specified with index option - must be a file
You must include a file name with the -index option. You cannot use directory names.
Cannot establish NFS service over /dev/tcp: transport setup problem
This message is often created when the services information in the name space has not been updated. It can also be reported for UDP. To fix this problem, you must update the services data in the name space. For NIS+ the entries should be:
nfsd    nfsd    tcp     2049    NFS server daemon
nfsd    nfsd    udp     2049    NFS server daemon
For NIS and /etc/services, the entries should be:
nfsd    2049/tcp    nfs    # NFS server daemon
nfsd    2049/udp    nfs    # NFS server daemon
Cannot use index option without public option
Include the public option with the index option in the share command. You must define the public file handle for the -index option to work.
Releases prior to 2.6 required that the public file handle be set using the share command. Because the Solaris 2.6 release sets the public file handle to / by default, this error message is no longer relevant.
NOTICE: NFS3: failing over from host1 to host2
This message is displayed on the console when a failover has occurred. It is an advisory message only.
filename: File too large
An NFS version 2 client is trying to access a file that is over 2 Gbytes.
mount: ... server not responding:RPC_PMAP_FAILURE - RPC_TIMED_OUT
The server sharing the file system you are trying to mount is down or unreachable, at the wrong run level, or its rpcbind is dead or hung.
mount: ... server not responding: RPC_PROG_NOT_REGISTERED
The mount request reached rpcbind on the server, but the NFS mount daemon mountd is not registered.
mount: ...: No such file or directory
Either the remote directory or the local directory does not exist. Check the spelling of the directory names. Run ls on both directories.
mount: ...: Permission denied
Your computer name might not be in the list of clients or netgroup allowed access to the file system you want to mount. Use showmount -e to verify the access list.
nfs mount: ignoring invalid option "-option"
The -option flag is not valid. Refer to the mount_nfs(1M) man page to verify the required syntax.
This error message is not displayed when running the 2.6 version of the mount command or in earlier versions that have been patched.
nfs mount: NFS can't support "nolargefiles"
A Solaris 2.6 NFS client has attempted to mount a file system from an NFS server using the -nolargefiles option. This option is not supported for NFS file system types.
nfs mount: NFS V2 can't support "largefiles"
The NFS version 2 protocol cannot handle large files. You must use version 3 if access to large files is required.
NFS server hostname not responding still trying
If programs hang while doing file-related work, your NFS server might be dead. This message indicates that NFS server hostname is down or that there is a problem with the server or with the network. If failover is being used, then hostname is a list of servers. Start with "How to Check Connectivity on an NFS Client".
NFS fsstat failed for server hostname: RPC: Authentication error
This error can be caused by many situations. One of the most difficult situations to debug is when this error occurs because a user is in too many groups. Currently, a user can be in no more than 16 groups when accessing files through NFS mounts. If a user must have the functionality of being in more than 16 groups, and if Solaris 2.5 is running on the NFS server and the NFS clients, use ACLs to provide the needed access privileges.
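For example, a sketch of granting one hypothetical user access with an ACL instead of relying on group membership (the user name and file path are placeholders):

# setfacl -m user:jdoe:rw- /export/share/data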
replicas must have the same version
For NFS failover to function properly, the NFS servers that are replicas must support the same version of the NFS protocol. Mixing version 2 and version 3 servers is not allowed.
replicated mounts must be read-only
NFS failover does not work on file systems that are mounted read-write. Mounting the file system read-write would increase the likelihood that a file will change. NFS failover depends on the file systems being identical.
replicated mounts must not be soft
Replicated mounts require that you wait for a timeout before failover occurs. The soft option requires that the mount fail immediately when a timeout starts, so you cannot include the -soft option with a replicated mount.
share_nfs: Cannot share more than one filesystem with 'public' option
Check the /etc/dfs/dfstab file to make sure that only one file system is selected to be shared with the -public option. Only one public file handle can be established per server, so only one file system per server can be shared with this option.
WARNING: No network locking on hostname:path: contact admin to install server change
An NFS client has unsuccessfully attempted to establish a connection with the network lock manager on an NFS server. Rather than fail the mount, this warning is generated to warn you that locking is not going to work.