This chapter provides information on how to perform such NFS administration tasks as setting up NFS services, adding new file systems to share, and mounting file systems. The chapter also covers the use of the Secure NFS system, and the use of WebNFS functionality. The last part of the chapter includes troubleshooting procedures and a list of some of the NFS error messages and their meanings.
Your responsibilities as an NFS administrator depend on your site's requirements and the role of your computer on the network. You might be responsible for all the computers on your local network, in which case you might be responsible for determining these configuration items:
Which computers should be dedicated servers
Which computers should act as both servers and clients
Which computers should be clients only
Maintaining a server after it has been set up involves the following tasks:
Sharing and unsharing file systems as necessary
Modifying administrative files to update the lists of file systems your computer shares or mounts automatically
Checking the status of the network
Diagnosing and fixing NFS-related problems as they arise
Setting up maps for autofs
Remember, a computer can be both a server and a client—sharing local file systems with remote computers and mounting remote file systems.
Servers provide access to their file systems by sharing the file systems over the NFS environment. You specify which file systems are to be shared with the share command or the /etc/dfs/dfstab file.
Entries in the /etc/dfs/dfstab file are shared automatically whenever you start NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis. For example, if your computer is a server that supports home directories, you need to make the home directories available at all times. Most file-system sharing should be done automatically. The only time that manual sharing should occur is during testing or troubleshooting.
The dfstab file lists all the file systems that your server shares with its clients. This file also controls which clients can mount a file system. You can modify dfstab to add or delete a file system or change the way sharing is done. Just edit the file with any text editor that is supported (such as vi). The next time that the computer enters run level 3, the system reads the updated dfstab to determine which file systems should be shared automatically.
Each line in the dfstab file consists of a share command—the same command that you type at the command-line prompt to share the file system. The share command is located in /usr/sbin.
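As an illustration, the dfstab file on such a server might contain entries like the following. The paths, options, and descriptions shown here are examples only.

```
# cat /etc/dfs/dfstab

#       Place share(1M) commands here for automatic execution
#       on entering init state 3.

share -F nfs -o ro -d "man pages" /export/share/man
share -F nfs -o rw=eng -d "source tree" /usr/src
```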
Table 14–1 File-System Sharing Task Map
| Task | Description | For Instructions |
|---|---|---|
| Establish automatic file-system sharing | Steps to configure a server so that file systems are automatically shared when the server is rebooted | |
| Enable WebNFS | Steps to configure a server so that users can access files by using WebNFS | |
| Enable NFS server logging | Steps to configure a server so that NFS logging is run on selected file systems | |
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Add entries for each file system to be shared.
Edit /etc/dfs/dfstab. Add one entry to the file for every file system that you want to be automatically shared. Each entry must be on a line by itself in the file and use this syntax:
share [-F nfs] [-o specific-options] [-d description] pathname
See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.
Check if the NFS service is running on the server.
If this is the first share command or set of share commands that you have initiated, the NFS service might not be running. Check that one of the NFS daemons is running by using the following command.
# pgrep nfsd
318
318 is the process ID for nfsd in this example. If an ID is not displayed, then the service is not running. The second daemon to check for is mountd.
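As a convenience, both daemons can be checked in one pass. The following sketch uses only standard shell and pgrep; on a host where the NFS service is stopped, both daemons are reported as not running.

```shell
# Check each NFS daemon and record its status; nfsd serves file
# requests and mountd answers mount requests.
status=""
for daemon in nfsd mountd; do
    if pgrep -x "$daemon" > /dev/null 2>&1; then
        status="${status}${daemon}: running
"
    else
        status="${status}${daemon}: not running
"
    fi
done
printf "%s" "$status"
```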
(Optional) Start the NFS service.
If the previous step does not report a process ID for nfsd, start the NFS service by using the following command.
# /etc/init.d/nfs.server start
This command ensures that the NFS service is now running on the server and that the service restarts automatically when the server enters run level 3 during boot.
(Optional) Share the file system.
After the entry is in /etc/dfs/dfstab, the file system can be shared by either rebooting the system or by using the shareall command. If the NFS service was started earlier, this command does not need to be run because the init script runs the command.
# shareall
Verify that the information is correct.
Run the share command to check that the correct options are listed:
# share
-               /export/share/man   ro   ""
-               /usr/src            rw=eng   ""
-               /export/ftp         ro,public   ""
The next step is to set up your autofs maps so that clients can access the file systems that you have shared on the server. See Task Overview for Autofs Administration.
Starting with the 2.6 release, by default all file systems that are available for NFS mounting are automatically available for WebNFS access. The only condition that requires the use of this procedure is one of the following:
To allow NFS mounting on a server that does not already allow NFS mounting
To reset the public file handle to shorten NFS URLs by using the public option
To force the loading of a specific html file by using the index option
See Planning for WebNFS Access for a list of issues that you should consider before starting the WebNFS service.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Add entries for each file system to be shared by using the WebNFS service.
Edit /etc/dfs/dfstab. Add one entry to the file for every file system. The public and index tags that are shown in the following example are optional.
share -F nfs -o ro,public,index=index.html /export/ftp
See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.
Check if the NFS service is running on the server.
If this is the first share command or set of share commands that you have initiated, the NFS daemons might not be running. Check that one of the NFS daemons is running by using the following command.
# pgrep nfsd
318
318 is the process ID for nfsd in this example. If an ID is not displayed, then the service is not running. The second daemon to check for is mountd.
(Optional) Start the NFS service.
If the previous step does not report a process ID for nfsd, start the NFS service by using the following command.
# /etc/init.d/nfs.server start
This command ensures that the NFS service is now running on the server and that the service restarts automatically when the server enters run level 3 during boot.
(Optional) Share the file system.
After the entry is in /etc/dfs/dfstab, the file system can be shared by either rebooting the system or by using the shareall command. If the NFS service was started earlier, this command does not need to be run because the script runs the command.
# shareall
Verify that the information is correct.
Run the share command to check that the correct options are listed:
# share
-               /export/share/man   ro   ""
-               /usr/src            rw=eng   ""
-               /export/ftp         ro,public,index=index.html   ""
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
(Optional) Change file-system configuration settings.
In /etc/nfs/nfslog.conf, you can change the settings in one of two ways. You can edit the default settings for all file systems by changing the data that is associated with the global tag. Alternately, you can add a new tag for this file system. If these changes are not needed, you do not need to change this file. The format of /etc/nfs/nfslog.conf is described in nfslog.conf(4).
Add entries for each file system to be shared by using NFS server logging.
Edit /etc/dfs/dfstab. Add one entry to the file for the file system on which you are enabling NFS server logging. The tag that is used with the log=tag option must be entered in /etc/nfs/nfslog.conf. This example uses the default settings in the global tag.
share -F nfs -o ro,log=global /export/ftp
See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.
Check if the NFS service is running on the server.
If this is the first share command or set of share commands that you have initiated, the NFS daemons might not be running. Check that one of the NFS daemons is running by using the following command.
# pgrep nfsd
318
318 is the process ID for nfsd in this example. If an ID is not displayed, then the service is not running. The second daemon to check for is mountd.
(Optional) Start the NFS service.
If the previous step does not report a process ID for nfsd, start the NFS service by using the following command.
# /etc/init.d/nfs.server start
This command ensures that the NFS service is now running on the server and that the service restarts automatically when the server enters run level 3 during boot.
(Optional) Share the file system.
After the entry is in /etc/dfs/dfstab, the file system can be shared by either rebooting the system or by using the shareall command. If the NFS service was started earlier, this command does not need to be run because the script runs the command.
# shareall
Verify that the information is correct.
Run the share command to check that the correct options are listed:
# share
-               /export/share/man   ro   ""
-               /usr/src            rw=eng   ""
-               /export/ftp         ro,log=global   ""
(Optional) Start the NFS log daemon, nfslogd, if it is not running already.
Restarting the NFS daemons by using the nfs.server script starts the nfslogd daemon if the nfslog.conf file exists. Otherwise, run the nfslogd command once by hand to create the files so that the daemon restarts automatically when the server is rebooted.
# /usr/lib/nfs/nfslogd
You can mount file systems in several ways. File systems can be mounted automatically when the system is booted, on demand from the command line, or through the automounter. The automounter provides many advantages over mounting at boot time or mounting from the command line. However, many situations require a combination of all three methods. Additionally, several ways of enabling or disabling processes exist, depending on the options you use when mounting the file system. See the following table for a complete list of the tasks that are associated with file-system mounting.
Table 14–2 Task Map for Mounting File Systems
| Task | Description | For Instructions |
|---|---|---|
| Mount a file system at boot time | Steps so that a file system is mounted whenever a system is rebooted. | How to Mount a File System at Boot Time |
| Mount a file system by using a command | Steps to mount a file system when a system is running. This procedure is useful when testing. | How to Mount a File System From the Command Line |
| Mount with the automounter | Steps to access a file system on demand without using the command line. | Mounting With the Automounter |
| Prevent large files | Steps to prevent large files from being created on a file system. | How to Disable Large Files on an NFS Server |
| Start client-side failover | Steps to enable the automatic switchover to a working file system if a server fails. | How to Use Client-Side Failover |
| Disable mount access for a client | Steps to disable the ability of one client to access a remote file system. | How to Disable Mount Access for One Client |
| Provide access to a file system through a firewall | Steps to allow access to a file system through a firewall by using the WebNFS protocol. | How to Mount an NFS File System Through a Firewall |
| Mount a file system by using an NFS URL | Steps to allow access to a file system by using an NFS URL. This process allows for file-system access without using the MOUNT protocol. | How to Mount an NFS File System Using an NFS URL |
If you want to mount file systems at boot time instead of using autofs maps, follow this procedure. This procedure must be completed on every client for remote file systems.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Add an entry for the file system to /etc/vfstab.
Entries in the /etc/vfstab file have the following syntax:
special fsckdev mountp fstype fsckpass mount-at-boot mntopts
See the vfstab(4) man page for more information.
NFS servers should not have NFS vfstab entries because of a potential deadlock. The NFS service is started after the entries in /etc/vfstab are checked. Consider the following scenario: if two servers that mount file systems from each other fail at the same time, each system could hang as the systems reboot.
You want a client machine to mount the /var/mail directory from the server wasp. You want the file system to be mounted as /var/mail on the client and you want the client to have read-write access. Add the following entry to the client's vfstab file.
wasp:/var/mail - /var/mail nfs - yes rw
Mounting a file system from the command line is often done to test a new mount point. This type of mount allows for temporary access to a file system that is not available through the automounter.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Mount the file system.
# mount -F nfs -o ro bee:/export/share/local /mnt
In this instance, the /export/share/local file system from the server bee is mounted read-only on /mnt on the local system. Mounting from the command line allows for temporary viewing of the file system. You can unmount the file system with umount or by rebooting the local host.
Starting with the 2.6 release, no version of the mount command warns about invalid options. The command silently ignores any options that cannot be interpreted. To prevent unexpected behavior, ensure that you verify all of the options that were used.
Task Overview for Autofs Administration includes the specific instructions for establishing and supporting mounts with the automounter. Without any changes to the generic system, clients should be able to access remote file systems through the /net mount point. To mount the /export/share/local file system from the previous example, you need to type the following:
% cd /net/bee/export/share/local
Because the automounter allows all users to mount file systems, root access is not required. The automounter also provides for automatic unmounting of file systems, so you do not need to unmount file systems after you are finished.
For servers that support clients that cannot handle a file over 2 GBytes, you might need to disable the ability to create large files.
Versions prior to 2.6 of the Solaris operating environment cannot use large files. If the clients need to access large files, check that the clients of the NFS server are running, at minimum, the 2.6 release.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Check that no large files exist on the file system.
Here is an example of a command that you can run to locate large files:
# cd /export/home1
# find . -xdev -size +2000000 -exec ls -l {} \;
If large files are on the file system, you must remove or move these files to another file system.
Unmount the file system.
# umount /export/home1
Reset the file system state if the file system has been mounted by using largefiles.
fsck resets the file system state if no large files exist on the file system:
# fsck /export/home1
Mount the file system by using nolargefiles.
# mount -F ufs -o nolargefiles /export/home1
You can mount from the command line, but to make the option more permanent, add an entry that resembles the following into /etc/vfstab:
/dev/dsk/c0t3d0s1 /dev/rdsk/c0t3d0s1 /export/home1 ufs 2 yes nolargefiles
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
On the NFS client, mount the file system by using the ro option.
You can mount from the command line, through the automounter, or by adding an entry to /etc/vfstab that resembles the following:
bee,wasp:/export/share/local - /usr/local nfs - no ro
This syntax has historically been allowed by the automounter. However, failover was not available after the file systems were mounted, only when a server was being selected.
You cannot mix servers that are running different versions of the NFS protocol by using the command line or a vfstab entry. Mixing servers that support the NFS version 2 or version 3 protocol can only be done with autofs. In autofs, the best subset of version 2 or version 3 servers is used.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Add an entry in /etc/dfs/dfstab.
The first example allows mount access to all clients in the eng netgroup except the host that is named rose. The second example allows mount access to all clients in the eng.example.com DNS domain except for rose.
share -F nfs -o ro=-rose:eng /export/share/man
share -F nfs -o ro=-rose:.eng.example.com /export/share/man
For additional information on access lists, see Setting Access Lists With the share Command. For a description of /etc/dfs/dfstab, see dfstab(4).
Share the file system.
The NFS server does not use changes to /etc/dfs/dfstab until the file systems are shared again or until the server is rebooted.
# shareall
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Manually mount the file system by using a command such as the following:
# mount -F nfs -o public bee:/export/share/local /mnt
In this example, the file system /export/share/local is mounted on the local client by using the public file handle. An NFS URL can be used instead of the standard path name. If the public file handle is not supported by the server bee, the mount operation fails.
This procedure requires that the file system on the NFS server be shared by using the public option. Additionally, any firewalls between the client and the server must allow TCP connections on port 2049. Starting with the 2.6 release, all file systems that are shared allow for public file handle access, so the public option is applied by default.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
(Optional) If you are using NFS version 2 or version 3, manually mount the file system by using a command such as the following:
# mount -F nfs nfs://bee:3000/export/share/local /mnt
In this example, the /export/share/local file system is being mounted from the server bee by using NFS port number 3000. The port number is not required and by default the standard NFS port number of 2049 is used. You can choose to include the public option with an NFS URL. Without the public option, the MOUNT protocol is used if the public file handle is not supported by the server. The public option forces the use of the public file handle, and the mount fails if the public file handle is not supported.
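For example, to combine an NFS URL with the public option (using the same illustrative server and file system as above):

```
# mount -F nfs -o public nfs://bee/export/share/local /mnt
```

With this form, the mount succeeds only if the server supports the public file handle; no fallback to the MOUNT protocol occurs.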
This section describes some of the tasks necessary to do the following:
Start and stop the NFS server
Start and stop the automounter
| Task | Description | For Instructions |
|---|---|---|
| Start the NFS server | Steps to start the NFS service if it has not been started automatically. | |
| Stop the NFS server | Steps to stop the NFS service. Normally the service should not need to be stopped. | |
| Start the automounter | Steps to start the automounter. This procedure is required when some of the automounter maps are changed. | |
| Stop the automounter | Steps to stop the automounter. This procedure is required when some of the automounter maps are changed. | |
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Enable the NFS service daemons.
# /etc/init.d/nfs.server start
This command starts the daemons if an entry is in /etc/dfs/dfstab.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Disable the NFS service daemons.
# /etc/init.d/nfs.server stop
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Enable the autofs daemon.
# /etc/init.d/autofs start
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Disable the autofs daemon.
# /etc/init.d/autofs stop
To use the Secure NFS system, all the computers that you are responsible for must have a domain name. A domain is an administrative entity—typically, several computers—that is part of a larger network. If you are running a name service, you should also establish the name service for the domain. See System Administration Guide: Naming and Directory Services (FNS and NIS+).
Kerberos V5 authentication is supported by the NFS service. “Introduction to SEAM” in System Administration Guide: Security Services discusses the Kerberos service.
You can also configure the Secure NFS environment to use Diffie-Hellman authentication. “Using Authentication Services (Tasks)” in System Administration Guide: Security Services discusses this authentication service.
Assign your domain a domain name, and make the domain name known to each computer in the domain.
See the System Administration Guide: Naming and Directory Services (FNS and NIS+) if you are using NIS+ as your name service.
Establish public keys and secret keys for your clients' users by using the newkey or nisaddcred command. Have each user establish his or her own secure RPC password by using the chkey command.
For information about these commands, see the newkey(1M), the nisaddcred(1M), and the chkey(1) man pages.
When public keys and secret keys have been generated, the public keys and encrypted secret keys are stored in the publickey database.
Verify that the name service is responding. If you are running NIS+, type the following:
# nisping -u
Last updates for directory eng.acme.com. :
Master server is eng-master.acme.com.
        Last update occurred at Mon Jun 5 11:16:10 1995
Replica server is eng1-replica-replica-58.acme.com.
        Last Update seen was Mon Jun 5 11:16:10 1995
If you are running NIS, verify that the ypbind daemon is running.
Verify that the keyserv daemon (the key server) is running.
Type the following command.
# ps -ef | grep keyserv
root    100     1  16    Apr 11 ?        0:00 /usr/sbin/keyserv
root   2215  2211   5  09:57:28 pts/0   0:00 grep keyserv
If the daemon isn't running, start the key server by typing the following:
# /usr/sbin/keyserv
Decrypt and store the secret key.
Usually, the login password is identical to the network password. In this situation, keylogin is not required. If the passwords are different, the users have to log in, and then do a keylogin. You still need to use the keylogin -r command as root to store the decrypted secret key in /etc/.rootkey.
You need to run keylogin -r if the root secret key changes or /etc/.rootkey is lost.
Update mount options for the file system.
For Diffie-Hellman authentication, edit the /etc/dfs/dfstab file and add the sec=dh option to the appropriate entries.
share -F nfs -o sec=dh /export/home
See the dfstab(4) man page for a description of /etc/dfs/dfstab.
Update the automounter maps for the file system.
Edit the auto_master map to include sec=dh as a mount option in the appropriate entries for Diffie-Hellman authentication:
/home auto_home -nosuid,sec=dh
Releases through Solaris 2.5 have a limitation. If a client does not securely mount a shared file system that is secure, users have access as nobody, rather than as themselves. For subsequent releases that use version 2, the NFS server refuses access if the security modes do not match, unless sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so clients do not need to specify sec=dh. The users have access to the files as themselves.
When you reinstall, move, or upgrade a computer, remember to save /etc/.rootkey if you do not establish new keys or change the keys for root. If you do delete /etc/.rootkey, you can always type the following:
# keylogin -r
This section provides instructions for administering the WebNFS system. Related tasks follow.
Table 14–4 Task Map for WebNFS Administration
| Task | Description | For Instructions |
|---|---|---|
| Plan for WebNFS | Issues to consider before enabling the WebNFS service. | Planning for WebNFS Access |
| Enable WebNFS | Steps to enable mounting of an NFS file system by using the WebNFS protocol. | How to Enable WebNFS Access |
| Enable WebNFS through a firewall | Steps to allow access to files through a firewall by using the WebNFS protocol. | How to Enable WebNFS Access Through a Firewall |
| Browse by using an NFS URL | Instructions for using an NFS URL within a web browser. | How to Browse Using an NFS URL |
| Use a public file handle with autofs | Steps to force use of the public file handle when mounting a file system with the automounter. | How to Use a Public File Handle With Autofs |
| Use an NFS URL with autofs | Steps to add an NFS URL to the automounter maps. | How to Use NFS URLs With Autofs |
| Provide access to a file system through a firewall | Steps to allow access to a file system through a firewall by using the WebNFS protocol. | How to Mount an NFS File System Through a Firewall |
| Mount a file system by using an NFS URL | Steps to allow access to a file system by using an NFS URL. This process allows for file system access without using the MOUNT protocol. | How to Mount an NFS File System Using an NFS URL |
To use WebNFS, you first need an application that is capable of running and loading an NFS URL (for example, nfs://server/path). The next step is to choose the file system that can be exported for WebNFS access. If the application is a web browser, the document root for the web server is often used. You need to consider several factors when choosing a file system to export for WebNFS access.
Each server has one public file handle that by default is associated with the server's root file system. The path in an NFS URL is evaluated relative to the directory with which the public file handle is associated. If the path leads to a file or directory within an exported file system, the server provides access. You can use the public option of the share command to associate the public file handle with a specific exported directory. Using this option allows URLs to be relative to the shared file system rather than to the server's root file system. The root file system does not allow web access unless the root file system is shared.
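The following sketch, using an illustrative server name and path, shows how the public option changes the way an NFS URL path is evaluated:

```
share -F nfs -o ro /export/ftp
    (public file handle at the server root; nfs://server/export/ftp/README resolves to /export/ftp/README)

share -F nfs -o ro,public /export/ftp
    (public file handle moved to /export/ftp; nfs://server/README resolves to /export/ftp/README)
```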
The WebNFS environment enables users who already have mount privileges to access files through a browser. This capability is enabled regardless of whether the file system is exported by using the public option. Because users already have access to these files through the NFS setup, this access should not create any additional security risk. You only need to share a file system by using the public option if users who cannot mount the file system need to use WebNFS access.
File systems that are already open to the public make good candidates for using the public option. Some examples are the top directory in an ftp archive or the main URL directory for a web site.
You can use the index option with the share command to force the loading of an HTML file. Otherwise, a listing of the directory is displayed when an NFS URL is accessed.
After a file system is chosen, review the files and set access permissions to restrict viewing of files or directories, as needed. Establish the permissions, as appropriate, for any NFS file system that is being shared. For many sites, 755 permissions for directories and 644 permissions for files provide the correct level of access.
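A minimal sketch of applying those permissions follows. The scratch directory created here stands in for a real shared file system such as /export/ftp; find and chmod do the actual work.

```shell
# Create a scratch tree that stands in for a shared file system.
DEMO=$(mktemp -d)
mkdir -p "$DEMO/pub"
echo "sample file" > "$DEMO/pub/README"

# 755 on directories: everyone may list and traverse, only the owner may modify.
find "$DEMO" -type d -exec chmod 755 {} \;

# 644 on regular files: everyone may read, only the owner may write.
find "$DEMO" -type f -exec chmod 644 {} \;

ls -ld "$DEMO/pub" "$DEMO/pub/README"
```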
You need to consider additional factors if both NFS and HTTP URLs are to be used to access one web site. These factors are described in WebNFS Limitations With Web Browser Use.
Browsers that are capable of supporting the WebNFS service should provide access to an NFS URL that resembles the following:
nfs://server<:port>/path

server
    Name of the file server
:port
    Port number to use (2049 is the default value)
path
    Path to the file, which can be relative to the public file handle or to the root file system
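For example, using the illustrative server and path from earlier in this chapter:

```
nfs://bee/export/share/local/README        (default port 2049, path evaluated against the public file handle)
nfs://bee:3000/export/share/local/README   (explicit port 3000)
```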
In most browsers, the URL service type (for example, nfs or http) is remembered from one transaction to the next. The exception occurs when a URL that includes a different service type is loaded. After you use an NFS URL, a reference to an HTTP URL might be loaded. If so, subsequent pages are loaded by using the HTTP protocol instead of the NFS protocol.
You can enable WebNFS access for clients that are not part of the local subnet by configuring the firewall to allow a TCP connection on port 2049. Just allowing access for httpd does not allow NFS URLs to be used.
This section describes some of the most common tasks you might encounter in your own environment. Recommended procedures are included for each scenario to help you configure autofs to best meet your clients' needs. To perform the tasks that are discussed in this section, use the Solaris Management Console tools or see the System Administration Guide: Naming and Directory Services (FNS and NIS+).
The following table provides a description and a pointer to many of the tasks that are related to autofs.
Table 14–5 Task Map for Autofs Administration
| Task | Description | For Instructions |
|---|---|---|
| Start autofs | Start the automount service without having to reboot the system | How to Start the Automounter |
| Stop autofs | Stop the automount service without disabling other network services | How to Stop the Automounter |
| Access file systems by using autofs | Access file systems by using the automount service | Mounting With the Automounter |
| Modify the autofs maps | Steps to modify the master map, which should be used to list other maps | How to Modify the Master Map |
| | Steps to modify an indirect map, which should be used for most maps | How to Modify Indirect Maps |
| | Steps to modify a direct map, which should be used when a direct association between a mount point on a client and a server is required | How to Modify Direct Maps |
| Modify the autofs maps to access non-NFS file systems | Steps to set up an autofs map with an entry for a CD-ROM application | How to Access CD-ROM Applications With Autofs |
| | Steps to set up an autofs map with an entry for a PC-DOS diskette | How to Access PC-DOS Data Diskettes With Autofs |
| | Steps to use autofs to access a CacheFS file system | How to Access NFS File Systems Using CacheFS |
| Using /home | Example of how to set up a common /home map | Setting Up a Common View of /home |
| | Steps to set up a /home map that refers to multiple file systems | How to Set Up /home With Multiple Home Directory File Systems |
| Using a new autofs mount point | Steps to set up a project-related autofs map | How to Consolidate Project-Related Files Under /ws |
| | Steps to set up an autofs map that supports different client architectures | How to Set Up Different Architectures to Access a Shared Namespace |
| | Steps to set up an autofs map that supports different operating systems | How to Support Incompatible Client Operating System Versions |
| Replicate file systems with autofs | Provide access to file systems that fail over | How to Replicate Shared Files Across Several Servers |
| Using security restrictions with autofs | Provide access to file systems while restricting remote root access to the files | How to Apply Autofs Security Restrictions |
| Using a public file handle with autofs | Force use of the public file handle when mounting a file system | How to Use a Public File Handle With Autofs |
| Using an NFS URL with autofs | Add an NFS URL so that the automounter can use it | How to Use NFS URLs With Autofs |
| Disable autofs browsability | Steps to disable browsability so that autofs mount points are not automatically populated on a single client | How to Completely Disable Autofs Browsability on a Single NFS Client |
| | Steps to disable browsability so that autofs mount points are not automatically populated on all clients | How to Disable Autofs Browsability for All Clients |
| | Steps to disable browsability so that a specific autofs mount point is not automatically populated on a client | How to Disable Autofs Browsability on a Selected File System |
The following tables describe several of the factors you need to be aware of when administering autofs maps. The type of map and the name service that you choose determine the mechanism that you must use to make changes to the autofs maps.
The following table describes the types of maps and their uses.
Table 14–6 Types of autofs Maps and Their Uses
Type of Map | Use
---|---
Master | Associates a directory with a map
Direct | Directs autofs to specific file systems
Indirect | Directs autofs to reference-oriented file systems
The following table describes how to make changes to your autofs environment that are based on your name service.
Table 14–7 Map Maintenance
Name Service | Method
---|---
Local files | Text editor
NIS | make files
NIS+ | nistbladm
The next table tells you when to run the automount command, depending on the modification you have made to the type of map. For example, if you have made an addition or a deletion to a direct map, you need to run the automount command on the local system. By running the command, you make the change effective. However, if you have modified an existing entry, you do not need to run the automount command for the change to become effective.
Table 14–8 When to Run the automount Command
Type of Map | Addition or Deletion | Modification
---|---|---
auto_master | Y | Y
direct | Y | N
indirect | N | N
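The decision rules in the table above can be sketched as a small shell helper. This is purely illustrative: the function name and the map-type and change-type argument values are assumptions for the example, not part of any Solaris interface.

```shell
# needs_automount MAPTYPE CHANGE
#   MAPTYPE: master | direct | indirect
#   CHANGE:  add-delete | modify
# Prints "yes" if the automount command must be run for the change
# to take effect, "no" otherwise.
needs_automount() {
    case "$1,$2" in
        master,*)          echo yes ;;  # master map changes always need automount
        direct,add-delete) echo yes ;;  # added/deleted direct entries need automount
        direct,modify)     echo no  ;;  # modified direct entries take effect on next mount
        indirect,*)        echo no  ;;  # indirect maps are consulted at each mount
        *)                 echo unknown ;;
    esac
}

needs_automount direct add-delete
needs_automount indirect modify
```

Running the two example calls prints `yes` and then `no`, matching the table.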
The following procedures require that you use NIS+ as your name service.
Log in as a user who has permissions to change the maps.
Using the nistbladm command, make your changes to the master map.
See the System Administration Guide: Naming and Directory Services (FNS and NIS+).
For each client, become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
For each client, run the automount command to ensure that your changes become effective.
Notify your users of the changes.
Notification is required so that the users can also run the automount command as superuser on their own computers.
The automount command gathers information from the master map whenever it is run.
Log in as a user who has permissions to change the maps.
Using the nistbladm command, make your changes to the indirect map.
See the System Administration Guide: Naming and Directory Services (FNS and NIS+).
The change becomes effective the next time that the map is used, which is the next time a mount is done.
Log in as a user who has permissions to change the maps.
Using the nistbladm command, add or delete your changes to the direct map.
See the System Administration Guide: Naming and Directory Services (FNS and NIS+).
If you added or deleted a mount-point entry in step 1, run the automount command.
Notify your users of the changes.
Notification is required so that the users can also run the automount command as superuser on their own computers.
If you only modify or change the contents of an existing direct map entry, you do not need to run the automount command.
For example, suppose you modify the auto_direct map so that the /usr/src directory is now mounted from a different server. If /usr/src is not mounted at this time, the new entry becomes effective immediately when you try to access /usr/src. If /usr/src is mounted now, you can wait until the automatic unmounting occurs and then access the file system from its new server.
Use indirect maps whenever possible. Indirect maps are easier to construct and less demanding on the computers' file systems. Also, indirect maps do not occupy as much space in the mount table as direct maps.
If you have a local disk partition that is mounted on /src and you plan to use the autofs service to mount other source directories, you might encounter a problem. If you specify the mount point /src, the autofs mount hides the local partition whenever you try to reach it.
You need to mount the partition in some other location, for example, on /export/src. You then need an entry in /etc/vfstab such as the following:
/dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /export/src ufs 3 yes -
You also need this entry in auto_src:
terra terra:/export/src
terra is the name of the computer.
Autofs can also mount file systems other than NFS file systems, such as file systems on removable media like diskettes or CD-ROMs. Normally, you would mount removable media by using Volume Manager. The following examples show how this mounting could be accomplished through autofs instead. Volume Manager and autofs do not work together, so these entries could not be used without first deactivating Volume Manager.
Instead of mounting a file system from a server, you put the media in the drive and reference the file system from the map. If you plan to access non-NFS file systems and you are using autofs, see the following procedures.
Use this procedure if you are not using Volume Manager.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Update the autofs map.
Add an entry for the CD-ROM file system, which should resemble the following:
hsfs -fstype=hsfs,ro :/dev/sr0
The name of the CD-ROM device that you intend to mount must follow the colon.
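As a fuller sketch of where such an entry could live, the following uses a hypothetical local indirect map referenced from the master map. The map name /etc/auto_cd and the mount point /cdrom are assumptions for illustration, not installed defaults:

```
# /etc/auto_master (assumed local addition)
/cdrom    /etc/auto_cd    -ro

# /etc/auto_cd (assumed map name)
hsfs    -fstype=hsfs,ro    :/dev/sr0
```

With this arrangement, accessing /cdrom/hsfs would trigger a local mount of the CD-ROM. The colon with no preceding host name tells autofs that the resource is a local device rather than an NFS server path.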
Use this procedure if you are not using Volume Manager.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Update the autofs map.
Add an entry for the diskette file system such as the following:
pcfs -fstype=pcfs :/dev/diskette
The cache file system (CacheFS) is a generic nonvolatile caching mechanism. CacheFS improves the performance of certain file systems by utilizing a small, fast local disk.
You can improve the performance of the NFS environment by using CacheFS to cache data from an NFS file system on a local disk.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Run the cfsadmin command to create a cache directory on the local disk.
# cfsadmin -c /var/cache
Add the cachefs entry to the appropriate automounter map.
For example, adding this entry to the master map caches all home directories:
/home auto_home -fstype=cachefs,cachedir=/var/cache,backfstype=nfs
Adding this entry to the auto_home map only caches the home directory for the user who is named rich:
rich -fstype=cachefs,cachedir=/var/cache,backfstype=nfs dragon:/export/home1/rich
Options in maps that are searched later override options set in maps that are searched earlier; the last options that are found are the ones that are used. In the previous example, an auto_home entry would need to repeat the options from the master map only if some of those options needed to change.
You can set up the automounter maps in several ways. The following tasks give details on how to customize the automounter maps to provide an easy-to-use directory structure.
The ideal is for all network users to be able to locate their own or anyone's home directory under /home. This view should be common across all computers, whether client or server.
Every Solaris installation comes with a master map: /etc/auto_master.
# Master map for autofs
#
+auto_master
/net        -hosts       -nosuid,nobrowse
/home       auto_home    -nobrowse
A map for auto_home is also installed under /etc.
# Home directory map for autofs
#
+auto_home
Except for a reference to an external auto_home map, this map is empty. If the directories under /home are to be common to all computers, do not modify this /etc/auto_home map. All home directory entries should appear in the name service files, either NIS or NIS+.
Users should not be permitted to run setuid executables from their home directories. Without this restriction, any user could have superuser privileges on any computer.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Install home directory partitions under /export/home.
If the system has several partitions, install the partitions under separate directories, for example, /export/home1, /export/home2, and so on.
Use the Solaris Management Console tools to create and maintain the auto_home map.
Whenever you create a new user account, type the location of the user's home directory in the auto_home map. Map entries can be simple, for example:
rusty      dragon:/export/home1/&
gwenda     dragon:/export/home1/&
charles    sundog:/export/home2/&
rich       dragon:/export/home3/&
Notice the use of the & (ampersand) to substitute the map key. For instance, in the first entry the ampersand is an abbreviation for rusty, so the entry is equivalent to the following.
rusty dragon:/export/home1/rusty
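The substitution can be demonstrated with a quick shell sketch. This is only an illustration of the expansion rule; the automounter performs it internally, not with sed:

```shell
key="rusty"
entry="dragon:/export/home1/&"

# Replace the literal & in the map entry with the map key,
# mimicking what the automounter does when it expands the entry.
expanded=$(printf '%s\n' "$entry" | sed "s/&/$key/")
echo "$expanded"   # dragon:/export/home1/rusty
```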
With the auto_home map in place, users can refer to any home directory (including their own) with the path /home/user. user is their login name and the key in the map. This common view of all home directories is valuable when logging in to another user's computer. Autofs mounts your home directory for you. Similarly, if you run a remote windowing system client on another computer, the client program has the same view of the /home directory.
This common view also extends to the server. Using the previous example, if rusty logs in to the server dragon, autofs there provides direct access to the local disk by loopback-mounting /export/home1/rusty onto /home/rusty.
Users do not need to be aware of the real location of their home directories. If rusty needs more disk space and needs to have his home directory relocated to another server, you need only change rusty's entry in the auto_home map to reflect the new location. Other users can continue to use the /home/rusty path.
Assume you are the administrator of a large software development project. You plan to make all project-related files available under a directory that is called /ws. This directory is to be common across all workstations at the site.
Add an entry for the /ws directory to the site auto_master map, either NIS or NIS+.
/ws auto_ws -nosuid
The auto_ws map determines the contents of the /ws directory.
Add the -nosuid option as a precaution.
This option prevents users from running setuid programs that might exist in any workspaces.
Add entries to the auto_ws map.
The auto_ws map is organized so that each entry describes a subproject. Your first attempt yields a map that resembles the following:
compiler   alpha:/export/ws/&
windows    alpha:/export/ws/&
files      bravo:/export/ws/&
drivers    alpha:/export/ws/&
man        bravo:/export/ws/&
tools      delta:/export/ws/&
The ampersand (&) at the end of each entry is an abbreviation for the entry key. For instance, the first entry is equivalent to the following:
compiler alpha:/export/ws/compiler
This first attempt provides a map that appears simple, but the map is inadequate. The project organizer decides that the documentation in the man entry should be provided as a subdirectory under each subproject. Also, each subproject requires subdirectories to describe several versions of the software. You must assign each of these subdirectories to an entire disk partition on the server.
Modify the entries in the map as follows:
compiler \
    /vers1.0    alpha:/export/ws/&/vers1.0 \
    /vers2.0    bravo:/export/ws/&/vers2.0 \
    /man        bravo:/export/ws/&/man
windows \
    /vers1.0    alpha:/export/ws/&/vers1.0 \
    /man        bravo:/export/ws/&/man
files \
    /vers1.0    alpha:/export/ws/&/vers1.0 \
    /vers2.0    bravo:/export/ws/&/vers2.0 \
    /vers3.0    bravo:/export/ws/&/vers3.0 \
    /man        bravo:/export/ws/&/man
drivers \
    /vers1.0    alpha:/export/ws/&/vers1.0 \
    /man        bravo:/export/ws/&/man
tools \
    /           delta:/export/ws/&
Although the map now appears to be much larger, the map still contains only the five entries. Each entry is larger because each entry contains multiple mounts. For instance, a reference to /ws/compiler requires three mounts for the vers1.0, vers2.0, and man directories. The backslash at the end of each line informs autofs that the entry is continued onto the next line. Effectively, the entry is one long line, though line breaks and some indenting have been used to make the entry more readable. The tools directory contains software development tools for all subprojects, so this directory is not subject to the same subdirectory structure. The tools directory continues to be a single mount.
This arrangement provides the administrator with much flexibility. Software projects typically consume substantial amounts of disk space. Through the life of the project, you might be required to relocate and expand various disk partitions. If these changes are reflected in the auto_ws map, the users do not need to be notified, as the directory hierarchy under /ws is not changed.
Because the servers alpha and bravo view the same autofs map, any users who log in to these computers can find the /ws name space as expected. These users are provided with direct access to local files through loopback mounts instead of NFS mounts.
You need to assemble a shared namespace for local executables and applications, such as spreadsheet applications and word-processing packages. The clients of this namespace use several different workstation architectures that require different executable formats. Also, some workstations are running different releases of the operating system.
Create the auto_local map with the nistbladm command.
See the System Administration Guide: Naming and Directory Services (FNS and NIS+).
Choose a single, site-specific name for the shared namespace. This name makes the files and directories that belong to this space easily identifiable.
For example, if you choose /usr/local as the name, the path /usr/local/bin is obviously a part of this name space.
For ease of user community recognition, create an autofs indirect map. Mount this map at /usr/local. Set up the following entry in the NIS+ (or NIS) auto_master map:
/usr/local auto_local -ro
Notice that the -ro mount option implies that clients cannot write to any files or directories.
Export the appropriate directory on the server.
Include a bin entry in the auto_local map.
Your directory structure resembles the following:
bin aa:/export/local/bin
(Optional) To serve clients of different architectures, change the entry by adding the autofs CPU variable.
bin aa:/export/local/bin/$CPU
For SPARC clients – Place executables in /export/local/bin/sparc.
For IA clients – Place executables in /export/local/bin/i386.
Combine the architecture type with a variable that determines the operating system type of the client.
You can combine the autofs OSREL variable with the CPU variable to form a name that determines both CPU type and OS release.
Create the following map entry.
bin aa:/export/local/bin/$CPU$OSREL
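To see how the two variables compose the final path, here is a small sketch with example values. The automounter supplies CPU and OSREL itself; the explicit assignments below are for illustration only:

```shell
CPU=sparc      # example value, as on a SPARC client
OSREL=5.6      # example value, as on a Solaris 2.6 client

# The map entry  bin aa:/export/local/bin/$CPU$OSREL  expands to:
path="/export/local/bin/${CPU}${OSREL}"
echo "$path"   # /export/local/bin/sparc5.6
```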
For clients that are running version 5.6 of the operating system, export the following file systems:
For SPARC clients – Export /export/local/bin/sparc5.6.
For IA clients – Export /export/local/bin/i3865.6.
The best way to share replicated file systems that are read-only is to use failover. See Client-Side Failover for a discussion of failover.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Modify the entry in the autofs maps.
Create the list of all replica servers as a comma-separated list, such as the following:
bin aa,bb,cc,dd:/export/local/bin/$CPU
Autofs chooses the nearest server. If a server has several network interfaces, list each interface. Autofs chooses the nearest interface to the client, avoiding unnecessary routing of NFS traffic.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Create the following entry in the name service auto_master file, either NIS or NIS+:
/home auto_home -nosuid

The -nosuid option prevents users from creating files with the setuid or setgid bit set.
This entry overrides the entry for /home in a generic local /etc/auto_master file. See the previous example. The override happens because the +auto_master reference to the external name service map occurs before the /home entry in the file. If the entries in the auto_home map include mount options, the nosuid option is overwritten. Therefore, either no options should be used in the auto_home map or the nosuid option must be included with each entry.
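As an illustration of the override behavior (the rich entry and the server name dragon are hypothetical), an auto_home entry that supplies its own options silently drops the master map's nosuid, so the option must be repeated:

```
# Name-service auto_master entry:
/home    auto_home    -nosuid

# This auto_home entry overrides the master map options, losing nosuid:
rich    -rw           dragon:/export/home1/rich

# Safe alternative, repeating nosuid alongside the entry's own options:
rich    -rw,nosuid    dragon:/export/home1/rich
```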
Do not mount the home directory disk partitions on or under /home on the server.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Create an entry in the autofs map such as the following:
/usr/local -ro,public bee:/export/share/local
The public option forces the public handle to be used. If the NFS server does not support a public file handle, the mount fails.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Create an autofs entry such as the following:
/usr/local -ro nfs://bee/export/share/local
The service tries to use the public file handle on the NFS server. However, if the server does not support a public file handle, the MOUNT protocol is used.
Starting with the Solaris 2.6 release, the default version of /etc/auto_master that is installed has the -nobrowse option added to the entries for /home and /net. In addition, the upgrade procedure adds the -nobrowse option to the /home and /net entries in /etc/auto_master if these entries have not been modified. However, you might have to make these changes manually or to turn off browsability for site-specific autofs mount points after the installation.
You can turn off the browsability feature in several ways. Disable the feature by using a command-line option to the automountd daemon, which completely disables autofs browsability for the client. Or disable browsability for each map entry on all clients by using the autofs maps in either an NIS or NIS+ name space. You can also disable the feature for each map entry on each client, using local autofs maps if no network-wide namespace is being used.
Become superuser or assume an equivalent role on the NFS client.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Perform either of the following steps.
(Optional) If you are using the Solaris 9 release or an earlier release, add the -n option to the startup script.
As root, edit the /etc/init.d/autofs script and add the -n option to the line that starts the automountd daemon:
/usr/lib/autofs/automountd -n \
      < /dev/null > /dev/console 2>&1 # start daemon
Restart the autofs service.
# /etc/init.d/autofs stop
# /etc/init.d/autofs start
To disable browsability for all clients, you must employ a name service such as NIS or NIS+. Otherwise, you need to manually edit the automounter maps on each client. In this example, the browsability of the /home directory is disabled. You must follow this procedure for each indirect autofs node that needs to be disabled.
Add the -nobrowse option to the /home entry in the name service auto_master file.
/home auto_home -nobrowse
Run the automount command on all clients.
The new behavior becomes effective after you run the automount command on the client systems or after a reboot.
# /usr/sbin/automount
In this example, browsability of the /net directory is disabled. You can use the same procedure for /home or any other autofs mount points.
Check the automount entry in /etc/nsswitch.conf.
For local file entries to have precedence, the entry in the name service switch file should list files before the name service. For example:
automount: files nisplus
This entry shows the default configuration in a standard Solaris installation.
Check the position of the +auto_master entry in /etc/auto_master.
For additions to the local files to have precedence over the entries in the namespace, the +auto_master entry must be moved to follow /net:
# Master map for automounter
#
/net        -hosts       -nosuid
/home       auto_home
/xfn        -xfn
+auto_master
A standard configuration places the +auto_master entry at the top of the file. This placement prevents any local changes from being used.
Add the nobrowse option to the /net entry in the /etc/auto_master file.
/net -hosts -nosuid,nobrowse
On all clients, run the automount command.
The new behavior becomes effective after running the automount command on the client systems or after a reboot.
# /usr/sbin/automount
When tracking down an NFS problem, remember the main points of possible failure: the server, the client, and the network. The strategy that is outlined in this section tries to isolate each individual component to find the one that is not working. In all situations, the mountd and nfsd daemons must be running on the server in order for remote mounts to succeed.
The mountd and nfsd daemons start automatically at boot time only if NFS share entries are in the /etc/dfs/dfstab file. Therefore, you must start mountd and nfsd manually when you set up sharing for the first time.
The -intr option is set by default for all mounts. If a program hangs with a “server not responding” message, you can kill the program with the keyboard interrupt Control-c.
When the network or server has problems, programs that access hard-mounted remote files fail differently from programs that access soft-mounted remote files. Hard-mounted remote file systems cause the client's kernel to retry the requests until the server responds again. Soft-mounted remote file systems cause the client's system calls to return an error after trying for a while. Because these errors can result in unexpected application errors and data corruption, avoid soft mounting.
When a file system is hard mounted, a program that tries to access the file system hangs if the server fails to respond. In this situation, the NFS system displays the following message on the console:
NFS server hostname not responding still trying
When the server finally responds, the following message appears on the console:
NFS server hostname ok
A program that accesses a soft-mounted file system whose server is not responding generates the following message:
NFS operation failed for server hostname: error # (error_message)
Because of possible errors, do not soft-mount file systems with read-write data or file systems from which executables are run. Writable data could be corrupted if the application ignores the errors. Mounted executables might not load properly and can fail.
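As an illustration (the server name bee and the paths are hypothetical), a hard, interruptible NFS mount could be configured in /etc/vfstab like this:

```
#device            device    mount        FS     fsck   mount     mount
#to mount          to fsck   point        type   pass   at boot   options
bee:/export/data   -         /mnt/data    nfs    -      yes       hard,intr
```

Because hard and intr are already the defaults, this entry simply makes the recommended choice explicit rather than relying on defaults.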
To determine where the NFS service has failed, you need to follow several procedures to isolate the failure. Check for the following items:
Can the client reach the server?
Can the client contact the NFS services on the server?
Are the NFS services running on the server?
In the process of checking these items, you might notice that other portions of the network are not functioning. For example, the name service or the physical network hardware might not be functioning. The System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) contains debugging procedures for several name services. Also, during the process you might see that the problem is not at the client end. An example is if you get at least one trouble call from every subnet in your work area. In this situation, you should assume that the problem is the server or the network hardware near the server. So, you should start the debugging process at the server, not at the client.
Check that the NFS server is reachable from the client. On the client, type the following command.
% /usr/sbin/ping bee
bee is alive
If the command reports that the server is alive, remotely check the NFS server. See How to Check the NFS Server Remotely.
If the server is not reachable from the client, ensure that the local name service is running.
For NIS+ clients, type the following:
% /usr/lib/nis/nisping -u
Last updates for directory eng.acme.com. :
Master server is eng-master.acme.com.
        Last update occurred at Mon Jun 5 11:16:10 1995
Replica server is eng1-replica-58.acme.com.
        Last Update seen was Mon Jun 5 11:16:10 1995
If the name service is running, ensure that the client has received the correct host information by typing the following:
% /usr/bin/getent hosts bee
129.144.83.117    bee.eng.acme.com
If the host information is correct, but the server is not reachable from the client, run the ping command from another client.
If the command run from a second client fails, see How to Verify the NFS Service on the Server.
If the server is reachable from the second client, use ping to check connectivity of the first client to other systems on the local net.
If this command fails, check the networking software configuration on the client (/etc/netmasks, /etc/nsswitch.conf, and so forth).
If the software is correct, check the networking hardware.
Try moving the client onto a second net drop.
Check that the NFS services have started on the NFS server by typing the following command:
% rpcinfo -s bee | egrep 'nfs|mountd'
 100003  3,2    tcp,udp,tcp6,udp6                           nfs     superuser
 100005  3,2,1  ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6   mountd  superuser
If the daemons have not been started, see How to Restart NFS Services.
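A check like this can be scripted. The sketch below parses a canned sample of rpcinfo -s output; the sample text is an assumption for illustration, not captured from a live server:

```shell
# Canned sample, as might come from: rpcinfo -s bee | egrep 'nfs|mountd'
sample='100003  3,2    tcp,udp,tcp6,udp6   nfs     superuser
100005  3,2,1  ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6  mountd  superuser'

for svc in nfs mountd; do
    # grep -w matches the service name only as a whole word
    if printf '%s\n' "$sample" | grep -qw "$svc"; then
        echo "$svc is registered"
    else
        echo "$svc is NOT registered"
    fi
done
```

With this sample input, the loop reports both services as registered; a missing line in the real output would flag the corresponding daemon.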
Check that the server's nfsd processes are responding.
On the client, type the following command to test the UDP NFS connections from the server.
% /usr/bin/rpcinfo -u bee nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
If the server is running, it prints a list of program and version numbers. Using the -t option tests the TCP connection. If this command fails, proceed to How to Verify the NFS Service on the Server.
Check that the server's mountd is responding, by typing the following command.
% /usr/bin/rpcinfo -u bee mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Using the -t option tests the TCP connection. If either attempt fails, proceed to How to Verify the NFS Service on the Server.
Check the local autofs service if it is being used:
% cd /net/wasp
Choose a /net or /home mount point that you know should work properly. If this command fails, then as root on the client, type the following to restart the autofs service:
# /etc/init.d/autofs stop
# /etc/init.d/autofs start
Verify that the file system is shared as expected on the server.
% /usr/sbin/showmount -e bee
/usr/src                 eng
/export/share/man        (everyone)
Check the entry on the server and the local mount entry for errors. Also, check the namespace. In this instance, if the first client is not in the eng netgroup, that client cannot mount the /usr/src file system.
Check all entries that include mounting information in all of the local files. The list includes /etc/vfstab and all the /etc/auto_* files.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Check that the server can reach the clients.
# ping lilac
lilac is alive
If the client is not reachable from the server, ensure that the local name service is running. For NIS+ clients, type the following:
% /usr/lib/nis/nisping -u
Last updates for directory eng.acme.com. :
Master server is eng-master.acme.com.
        Last update occurred at Mon Jun 5 11:16:10 1995
Replica server is eng1-replica-58.acme.com.
        Last Update seen was Mon Jun 5 11:16:10 1995
If the name service is running, check the networking software configuration on the server (/etc/netmasks, /etc/nsswitch.conf, and so forth).
Type the following command to check whether the rpcbind daemon is running.
# /usr/bin/rpcinfo -u localhost rpcbind
program 100000 version 1 ready and waiting
program 100000 version 2 ready and waiting
program 100000 version 3 ready and waiting
If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. If rpcbind seems to be hung, either reboot the server or follow the steps in How to Warm-Start rpcbind.
Type the following command to check whether the nfsd daemon is running.
# rpcinfo -u localhost nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting
# ps -ef | grep nfsd
root    232      1  0   Apr 07   ?      0:01 /usr/lib/nfs/nfsd -a 16
root   3127   2462  1   09:32:57 pts/3  0:00 grep nfsd
If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.
Type the following command to check whether the mountd daemon is running.
# /usr/bin/rpcinfo -u localhost mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
# ps -ef | grep mountd
root    145      1  0   Apr 07   ?     21:57 /usr/lib/autofs/automountd
root    234      1  0   Apr 07   ?      0:04 /usr/lib/nfs/mountd
root   3084   2462  1   09:30:20 pts/3  0:00 grep mountd
If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
To enable daemons without rebooting, type the following commands.
# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start
This remedy stops and restarts the daemons if an entry is in /etc/dfs/dfstab.
If the NFS server cannot be rebooted because of work in progress, you can restart rpcbind without having to restart all of the services that use RPC. Just complete a warm start as described in this procedure.
Become superuser or assume an equivalent role.
For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.
Determine the PID for rpcbind.
Run ps to get the PID, which is the value in the second column.
# ps -ef | grep rpcbind
root    115      1  0   May 31   ?      0:14 /usr/sbin/rpcbind
root  13000   6944  0   11:11:15 pts/3  0:00 grep rpcbind
Send a SIGTERM signal to the rpcbind process.
In this example, term is the signal that is to be sent and 115 is the PID for the program (see the kill(1) man page). This command causes rpcbind to create a list of the current registered services in /tmp/portmap.file and /tmp/rpcbind.file.
# kill -s term 115
If you do not kill the rpcbind process with the -s term option, you cannot complete a warm start of rpcbind. You must reboot the server to restore service.
Restart rpcbind.
Restart rpcbind in warm-start mode so that the files that were created by the kill command are consulted. A warm start also ensures that the process resumes without requiring a restart of all of the RPC services. See the rpcbind(1M) man page.
# /usr/sbin/rpcbind -w
Run the nfsstat command with the -m option to gather current NFS information. The name of the current server is printed after “currserver=”.
% nfsstat -m
/usr/local from bee,wasp:/export/share/local
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,
        acl,rsize=32768,wsize=32768,retrans=5
 Failover: noresponse=0, failover=0, remap=0, currserver=bee
In the Solaris 2.6 release and in any versions of the mount command that were patched after the 2.6 release, no warning is issued for invalid options. The following procedure helps determine whether the options that were supplied either on the command line or through /etc/vfstab were valid.
For this example, assume that the following command has been run:
# mount -F nfs -o ro,vers=2 bee:/export/share/local /mnt
Verify the options by running the following command.
% nfsstat -m /mnt
/mnt from bee:/export/share/local
 Flags: vers=2,proto=tcp,sec=sys,hard,intr,dynamic,acl,rsize=8192,wsize=8192,
        retrans=5
The file system from bee has been mounted with the protocol version set to 2. Unfortunately, the nfsstat command does not display information about all of the options. However, using the nfsstat command is the most accurate way to verify the options.
Check the entry in /etc/mnttab.
The mount command does not allow invalid options to be added to the mount table. Therefore, verify that the options that are listed in the file match those options that are listed on the command line. In this way, you can check those options that are not reported by the nfsstat command.
# grep bee /etc/mnttab
bee:/export/share/local /mnt nfs ro,vers=2,dev=2b0005e 859934818
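When many entries need checking, the options column can be pulled out of mnttab-format lines with a short awk filter. This helper is an illustrative convenience, not a standard tool; its name and the sample line are assumptions for the example:

```shell
# mnttab_opts: print the mount-options column (field 4) for entries
# whose device field begins with "server:". Reads mnttab-format lines
# (device, mount point, fstype, options, time) on stdin.
mnttab_opts() {
  awk -v srv="$1" 'index($0, srv ":") == 1 { print $4 }'
}

# Example, using the mnttab entry shown above:
printf '%s\n' 'bee:/export/share/local /mnt nfs ro,vers=2,dev=2b0005e 859934818' |
  mnttab_opts bee
# prints: ro,vers=2,dev=2b0005e
```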
Occasionally, you might encounter problems with autofs. This section should improve the problem-solving process. The section is divided into two subsections.
This section presents a list of the error messages that autofs generates. The list is divided into two parts:
Error messages that are generated by the verbose (-v) option of automount
Error messages that might appear at any time
Each error message is followed by a description and probable cause of the message.
When troubleshooting, start the autofs programs with the verbose (-v) option. Otherwise, you might experience problems without knowing why.
The following paragraphs are labeled with the error message you are likely to see if autofs fails, and a description of the possible problem.
While scanning a direct map, autofs has found an entry key without a prefixed /. Keys in direct maps must be full path names.
While scanning an indirect map, autofs has found an entry key that contains a /. Indirect map keys must be simple names—not path names.
The mount daemon on the server refuses to provide a file handle for server:pathname. Check the export table on the server.
Autofs was unable to create a mount point that was required for a mount. This problem most frequently occurs when you attempt to hierarchically mount all of a server's exported file systems. A required mount point can exist only in a file system that cannot be mounted, which means the file system cannot be exported. The mount point cannot be created because the exported parent file system is exported read-only.
Autofs has discovered an entry in an automount map that contains leading spaces. This problem is usually an indication of an improperly continued map entry. For example:
fake
   /blat frobz:/usr/frotz
In this example, the warning is generated when autofs encounters the second line because the first line should be terminated with a backslash (\).
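A continued entry written correctly ends the first line with a backslash, for example (the names are taken from the sample entry above):

```
fake \
   /blat frobz:/usr/frotz
```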
The required map cannot be located. This message is produced only when the -v option is used. Check the spelling and path name of the map name.
remount server:pathname on mountpoint: server not responding
Autofs has failed to remount a file system that it previously unmounted.
Autofs is attempting to mount over an existing mount point. This message means an internal error occurred in autofs (an anomaly).
The automounter mount point must be given as a full path name. Check the spelling and path name of the mount point.
Autofs does not allow its mount points to have a hierarchical relationship. An autofs mount point must not be contained within another automounted file system.
Autofs attempted to contact server, but received no response.
hostname: exports: rpc_err
An error occurred while getting the export list from hostname. This message indicates a server or network problem.
The map entry is malformed, and autofs cannot interpret the entry. Recheck the entry. Perhaps the entry has characters that need escaping.
mapname: nis_err
An error occurred when looking up an entry in a NIS map. This message can indicate NIS problems.
Autofs failed to do a mount. This occurrence can indicate a server or network problem.
Autofs cannot mount itself on mountpoint because it is not a directory. Check the spelling and path name of the mount point.
Autofs cannot send a query packet to a server in a list of replicated file system locations.
Autofs cannot receive replies from any of the servers in a list of replicated file system locations.
All these error messages indicate problems in attempting to ping servers for a replicated file system. This message can indicate a network problem.
Autofs failed to get pathconf information for the path name (see the fpathconf(2) man page).
Autofs is unable to contact the mount daemon on server that provides the information to pathconf().
If the /etc/auto* files have the execute bit set, the automounter tries to execute the maps, which creates messages such as the following:
/etc/auto_home: +auto_home: not found
In this situation, the auto_home file has incorrect permissions. Each entry in the file generates an error message that is similar to this message. The permissions to the file should be reset by typing the following command:
# chmod 644 /etc/auto_home
This section lists error messages, each followed by a description of the conditions that can create the error and at least one remedy.
Bad argument specified with index option - must be a file
You must include a file name with the index option. You cannot use directory names.
Cannot establish NFS service over /dev/tcp: transport setup problem
This message is often created when the services information in the namespace has not been updated. The message can also be reported for UDP. To fix this problem, you must update the services data in the namespace. For NIS+, the entries should be as follows:
nfsd nfsd tcp 2049 NFS server daemon
nfsd nfsd udp 2049 NFS server daemon
For NIS and /etc/services, the entries should be as follows:
nfsd 2049/tcp nfs # NFS server daemon
nfsd 2049/udp nfs # NFS server daemon
Cannot use index option without public option
Include the public option with the index option in the share command. You must define the public file handle in order for the index option to work.
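For example, a dfstab entry that satisfies both requirements could look like the following. The directory and file name are illustrative assumptions, not values from this guide:

```shell
# /etc/dfs/dfstab entry: index= names a file (not a directory),
# and the public option accompanies it
share -F nfs -o ro,public,index=index.html /export/ftp
```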
The Solaris 2.5.1 release required that the public file handle be set by using the share command. A change in the Solaris 2.6 release sets the public file handle to be root (/) by default. This error message is no longer relevant.
Could not start daemon: error
This message is displayed if the daemon terminates abnormally or if a system call error occurs. The error string defines the problem.
Could not use public filehandle in request to server
This message is displayed if the public option is specified but the NFS server does not support the public file handle. In this situation, the mount fails. To remedy this situation, either try the mount request without using the public file handle or reconfigure the NFS server to support the public file handle.
daemon running already with pid pid
The daemon is already running. If you want to run a new copy, kill the current version and start a new version.
error locking lock file
This message is displayed when the lock file that is associated with a daemon cannot be locked properly.
error checking lock file: error
This message is displayed when the lock file that is associated with a daemon cannot be opened properly.
NOTICE: NFS3: failing over from host1 to host2
This message is displayed on the console when a failover occurs. The message is advisory only.
filename: File too large
An NFS version 2 client is trying to access a file that is over 2 Gbytes.
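The limit exists because the version 2 protocol carries file sizes and offsets in 32-bit fields, so the largest usable file size is 2^31 - 1 bytes; a quick arithmetic check in the shell:

```shell
# Largest file size under NFS version 2: one byte short of 2 Gbytes
limit=$(( (1 << 31) - 1 ))
echo "$limit"    # prints 2147483647
```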
mount: ... server not responding:RPC_PMAP_FAILURE - RPC_TIMED_OUT
The server that is sharing the file system you are trying to mount is down or unreachable, at the wrong run level, or its rpcbind is dead or hung.
mount: ... server not responding: RPC_PROG_NOT_REGISTERED
The mount request registered with rpcbind, but the NFS mount daemon mountd is not registered.
Either the remote directory or the local directory does not exist. Check the spelling of the directory names. Run ls on both directories.
mount: ...: Permission denied
Your computer name might not be in the list of clients or netgroup that is allowed access to the file system you tried to mount. Use showmount -e to verify the access list.
NFS fsstat failed for server hostname: RPC: Authentication error
This error can be caused by many situations. One of the most difficult situations to debug occurs when a user is in too many groups. Currently, a user who accesses files through NFS mounts can be in no more than 16 groups. An alternative does exist for users who need to be in more than 16 groups: you can use access control lists to provide the needed access privileges, if the NFS server and the NFS clients run at least the Solaris 2.5 release.
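As a sketch of the ACL workaround, an administrator could grant the extra access directly rather than through a 17th group. The user name and path here are illustrative assumptions:

```shell
# Grant user jsmith rwx access through an ACL entry
# instead of additional group membership
setfacl -m user:jsmith:rwx /export/share/local/project

# Verify the resulting ACL entries
getfacl /export/share/local/project
```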
nfs mount: ignoring invalid option “-option”
The -option flag is not valid. Refer to the mount_nfs(1M) man page to verify the required syntax.
This error message is not displayed when running any version of the mount command that is included in a Solaris release from 2.6 to the current release or in earlier versions that have been patched.
nfs mount: NFS can't support “nolargefiles”
An NFS client has attempted to mount a file system from an NFS server by using the -nolargefiles option. This option is not supported for NFS file system types.
nfs mount: NFS V2 can't support “largefiles”
The NFS version 2 protocol cannot handle large files. You must use version 3 if access to large files is required.
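For example, to mount the file system shown in earlier examples with version 3 requested explicitly (server and paths are illustrative):

```shell
# Request NFS version 3 so that files larger than 2 Gbytes are supported
mount -F nfs -o vers=3 bee:/export/share/local /mnt
```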
NFS server hostname not responding still trying
If programs hang while doing file-related work, your NFS server might have failed. This message indicates that NFS server hostname is down or that a problem has occurred with the server or the network. If failover is being used, hostname is a list of servers. Start troubleshooting with How to Check Connectivity on an NFS Client.
port number in nfs URL not the same as port number in port option
The port number that is included in the NFS URL must match the port number that is included with the -port option to mount. If the port numbers do not match, the mount fails. Either change the command to make the port numbers identical or do not specify the port number that is incorrect. Usually, you do not need to specify the port number in the NFS URL and with the -port option.
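For example, the first command below is consistent because the URL and the -port option name the same port (2049, shown only for illustration), while the second relies on the default and is usually preferable:

```shell
# Consistent: the port in the NFS URL matches the port option
mount -F nfs -o port=2049 nfs://bee:2049/export/share/local /mnt

# Simpler: omit the port in both places and use the default
mount -F nfs nfs://bee/export/share/local /mnt
```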
replicas must have the same version
For NFS failover to function properly, the NFS servers that are replicas must support the same version of the NFS protocol. Mixing versions is not allowed.
replicated mounts must be read-only
NFS failover does not work on file systems that are mounted read-write. Mounting the file system read-write increases the likelihood that a file could change. NFS failover depends on the file systems being identical.
replicated mounts must not be soft
Replicated mounts require that you wait for a timeout before failover occurs. The soft option requires that the mount fail immediately when a timeout starts, so you cannot include the -soft option with a replicated mount.
share_nfs: Cannot share more than one filesystem with 'public' option
Check that the /etc/dfs/dfstab file has only one file system selected to be shared with the -public option. Only one public file handle can be established per server, so only one file system per server can be shared with this option.
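A valid dfstab therefore carries the public option on at most one share line, for example (the paths are illustrative):

```shell
# /etc/dfs/dfstab: only one file system carries the public option
share -F nfs -o ro,public /export/ftp
share -F nfs -o ro /export/share/local
```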
WARNING: No network locking on hostname:path: contact admin to install server change
An NFS client has unsuccessfully attempted to establish a connection with the network lock manager on an NFS server. Rather than fail the mount, this warning is generated to warn you that locking does not work.