System Administration Guide, Volume 3

Chapter 30 Remote File-System Administration

This chapter describes how to perform NFS administration tasks such as setting up NFS services, adding new file systems to share, mounting file systems, using the Secure NFS system, and using the WebNFS functionality. The last part of the chapter includes troubleshooting procedures and a list of many of the NFS error messages and their meanings.

Your responsibilities as an NFS administrator depend on your site's requirements and the role of your computer on the network. You might be responsible for all the computers on your local network, in which case you determine the configuration of each server and client, and you maintain each server after it has been set up.

Remember, a computer can be both a server and a client--sharing local file systems with remote computers and mounting remote file systems.

Automatic File-System Sharing

Servers provide access to their file systems by sharing them over the NFS environment. You specify which file systems are to be shared with the share command or the /etc/dfs/dfstab file.

Entries in the /etc/dfs/dfstab file are shared automatically whenever you start NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis. For example, if your computer is a server that supports home directories, you need to make the home directories available at all times. Most file-system sharing should be done automatically; the only time that manual sharing should occur is during testing or troubleshooting.

The dfstab file lists all the file systems that your server shares with its clients and controls which clients can mount a file system. If you want to modify dfstab to add or delete a file system or to modify the way sharing is done, edit the file with any supported text editor (such as vi). The next time the computer enters run level 3, the system reads the updated dfstab to determine which file systems should be shared automatically.

Each line in the dfstab file consists of a share command--the same command you type at the command-line prompt to share the file system. The share command is located in /usr/sbin.
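Because each dfstab line is just a share command, the file can be audited with standard text tools. A minimal sketch, using an illustrative file in /tmp rather than the real /etc/dfs/dfstab (the entries shown are hypothetical):

```shell
# Sketch: list the path shared by each entry in a dfstab-style file.
# The file name and its entries are illustrative only.
cat > /tmp/dfstab.demo <<'EOF'
share -F nfs -o rw=eng -d "home dirs" /export/home
share -F nfs -o ro /export/share/man
EOF
# The pathname is the last field of each share command.
awk '/^share/ {print $NF}' /tmp/dfstab.demo
# Prints:
#   /export/home
#   /export/share/man
```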

Table 30-1 File-System Sharing Task Map

Task: Establish automatic file-system sharing
Description: Steps to configure a server so that file systems are automatically shared when the server is rebooted.
For Instructions, Go To: "How to Set Up Automatic File-System Sharing"

Task: Enable WebNFS
Description: Steps to configure a server so that users can access files by using WebNFS.
For Instructions, Go To: "How to Enable WebNFS Access"

Task: Enable NFS server logging
Description: Steps to configure a server so that NFS logging is run on selected file systems.
For Instructions, Go To: "How to Enable NFS Server Logging"

How to Set Up Automatic File-System Sharing

  1. Become superuser.

  2. Add entries for each file system to be shared.

    Edit /etc/dfs/dfstab and add one entry to the file for each file system that you want to be automatically shared. Each entry must be on a line by itself in the file and uses this syntax:


    share [-F nfs] [-o specific-options] [-d description] pathname

    See the share_nfs(1M) man page for a complete list of options.

  3. Check that the NFS service is running on the server.

    If this is the first share command or set of share commands that you have initiated, it is likely that the NFS daemons are not running. The following commands kill the daemons and restart them.


    # /etc/init.d/nfs.server stop
    # /etc/init.d/nfs.server start
    

    This ensures that the NFS service is now running on the server and that the service will restart automatically when the server is at run level 3 during boot.

Where to Go From Here

The next step is to set up your autofs maps so that clients can access the file systems you have shared on the server. See "Autofs Administration Task Overview".

How to Enable WebNFS Access

Starting with the 2.6 release, all file systems that are available for NFS mounting are automatically available for WebNFS access by default. You need to follow this procedure only on servers that do not already allow NFS mounting, when resetting the public file handle would usefully shorten NFS URLs, or when the -index option is required.

  1. Become superuser.

  2. Add entries for each file system to be shared using the WebNFS service.

    Edit /etc/dfs/dfstab and add one entry to the file for each file system, using the -public option. The -index option shown in the following example is optional.


    share -F nfs -o ro,public,index=index.html /export/ftp

    See the share_nfs(1M) man page for a complete list of options.

  3. Check that the NFS service is running on the server.

    If this is the first share command or set of share commands that you have initiated, it is likely that the NFS daemons are not running. The following commands kill and restart the daemons.


    # /etc/init.d/nfs.server stop
    # /etc/init.d/nfs.server start
    
  4. Share the file system.

    After the entry is in /etc/dfs/dfstab, the file system can be shared either by rebooting the system or by using the shareall command. If the NFS daemons were restarted in step 3, this command does not need to be run because the script runs the command.


    # shareall
    
  5. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,public,index=index.html  ""
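The verification amounts to confirming that the public option appears in the share output for the file system. A sketch of that check against a captured line (the sample mirrors the /export/ftp entry shown above):

```shell
# Sketch: check a captured line of `share` output for the public option.
line='-        /export/ftp    ro,public,index=index.html  ""'
case "$line" in
*public*) echo "public file handle enabled" ;;
*)        echo "public option missing" ;;
esac
# Prints: public file handle enabled
```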

How to Enable NFS Server Logging

  1. Become superuser.

  2. Optional: Change file system configuration settings.

    In /etc/nfs/nfslog.conf, you can either edit the default settings for all file systems by changing the data associated with the global tag, or you can add a new tag for this file system. If neither change is needed, you do not need to edit this file. The format of /etc/nfs/nfslog.conf is described in nfslog.conf(1).
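Each nfslog.conf line associates a tag with tag=value settings, which makes the file easy to inspect with text tools. A sketch against a sample file (the layout follows nfslog.conf(1), but the file name and values here are illustrative only):

```shell
# Sketch: read the settings associated with the global tag from an
# nfslog.conf-format file. File name and contents are illustrative.
cat > /tmp/nfslog.conf.demo <<'EOF'
global defaultdir=/var/nfs log=nfslog fhtable=fhtable buffer=nfslog_workbuffer
EOF
# Print each setting attached to the global tag, one per line.
awk '$1 == "global" {for (i = 2; i <= NF; i++) print $i}' /tmp/nfslog.conf.demo
# First line printed: defaultdir=/var/nfs
```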

  3. Add entries for each file system to be shared using NFS server logging.

    Edit /etc/dfs/dfstab and add one entry for each file system on which you want NFS server logging enabled. The tag that is used with the log=tag option must be entered in /etc/nfs/nfslog.conf. This example uses the default settings in the global tag.


    share -F nfs -o ro,log=global /export/ftp

    See the share_nfs(1M) man page for a complete list of options.

  4. Check that the NFS service is running on the server.

    If this is the first share command or set of share commands that you have initiated, it is likely that the NFS daemons are not running. The following commands kill and restart the daemons.


    # /etc/init.d/nfs.server stop
    # /etc/init.d/nfs.server start
    
  5. Share the file system.

    After the entry is in /etc/dfs/dfstab, the file system can be shared by either rebooting the system or by using the shareall command. If the NFS daemons were restarted earlier, this command does not need to be run because the script runs the command.


    # shareall
    
  6. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,log=global  ""
  7. Start the NFS log daemon, nfslogd, if it is not running already.

    Restarting the NFS daemons with the nfs.server script starts nfslogd if the nfslog.conf file exists. Otherwise, run the daemon once by hand so that the log files are created; after that, the daemon restarts automatically whenever the server is rebooted.


    # /usr/lib/nfs/nfslogd
    

Mounting File Systems

You can mount file systems in several ways. They can be mounted automatically when the system is booted, on demand from the command line, or through the automounter. The automounter provides many advantages over mounting at boot time or mounting from the command line, but many situations require a combination of all three. In addition to these three ways of mounting a file system, there are several ways of enabling or disabling processes, depending on the options you use when mounting the file system. See the following table for a complete list of the tasks associated with file-system mounting.

Table 30-2 Mounting File Systems Task Map

Task: Mount a file system at boot time
Description: Steps so that a file system is mounted whenever a system is rebooted.
For Instructions, Go To: "How to Mount a File System at Boot Time"

Task: Mount a file system using a command
Description: Steps to mount a file system while a system is running. This procedure is useful when testing.
For Instructions, Go To: "How to Mount a File System From the Command Line"

Task: Mount with the automounter
Description: Steps to access a file system on demand without using the command line.
For Instructions, Go To: "Mounting With the Automounter"

Task: Disallow large files
Description: Steps to prevent large files from being created on a file system.
For Instructions, Go To: "How to Disable Large Files on an NFS Server"

Task: Use client-side failover
Description: Steps to enable the automatic switchover to a working file system if a server fails.
For Instructions, Go To: "How to Use Client-Side Failover"

Task: Disable mount access for a client
Description: Steps to disable the ability of one client to access a remote file system.
For Instructions, Go To: "How to Disable Mount Access for One Client"

Task: Provide access to a file system through a firewall
Description: Steps to allow access to a file system through a firewall by using the WebNFS protocol.
For Instructions, Go To: "How to Mount an NFS File System Through a Firewall"

Task: Mount a file system using an NFS URL
Description: Steps to allow access to a file system by using an NFS URL. This process allows for file-system access without using the MOUNT protocol.
For Instructions, Go To: "How to Mount an NFS File System Using an NFS URL"

How to Mount a File System at Boot Time

If you want to mount file systems at boot time instead of using autofs maps, follow this procedure. Although you must follow this procedure for all local file systems, it is not recommended for remote file systems because it must be completed on every client.

  1. Become superuser.

  2. Add an entry for the file system to /etc/vfstab.

Entries in the /etc/vfstab file have the following syntax:

special  fsckdev  mountp  fstype  fsckpass  mount-at-boot  mntopts

See the vfstab(4) man page for more information.


Caution -

NFS servers should not have NFS vfstab entries because of a potential deadlock. The NFS service is started after the entries in /etc/vfstab are checked, so if two servers that mount file systems from each other fail at the same time, each system could hang as the systems reboot.


Example of a vfstab entry

You want a client computer to mount the /var/mail directory from the server wasp. You would like the file system to be mounted as /var/mail on the client and you want the client to have read-write access. Add the following entry to the client's vfstab file.


wasp:/var/mail - /var/mail nfs - yes rw
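A vfstab entry is a single whitespace-separated line with the seven fields named above, so the shell's own word splitting can be used to pick it apart. A sketch using the /var/mail example:

```shell
# Sketch: split the example vfstab entry into its seven fields.
entry='wasp:/var/mail - /var/mail nfs - yes rw'
set -- $entry            # unquoted on purpose, to split on whitespace
echo "fields: $#"        # fields: 7
echo "special: $1"       # special: wasp:/var/mail
echo "mount point: $3"   # mount point: /var/mail
echo "mntopts: $7"       # mntopts: rw
```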

How to Mount a File System From the Command Line

Mounting a file system from the command line is often done to test a new mount point or to allow for temporary access to a file system that is not available through the automounter.

  1. Become superuser.

  2. Mount the file system.

    Type the following command:


    # mount -F nfs -o ro bee:/export/share/local /mnt
    

    In this case, the /export/share/local file system from the server bee is mounted read-only on /mnt on the local system. Mounting from the command line allows for temporary viewing of the file system. You can unmount the file system with umount or by rebooting the local host.


    Caution -

    Starting with the 2.6 release, the mount command does not warn about invalid options. The command silently ignores any options that cannot be interpreted. To prevent unexpected behavior, make sure to verify all of the options that you use.


Mounting With the Automounter

"Autofs Administration Task Overview" includes the specific instructions for establishing and supporting mounts with the automounter. Without any changes to the generic system, clients should be able to access remote file systems through the /net mount point. To mount the /export/share/local file system from the previous example, all you need to do is type:


% cd /net/bee/export/share/local

Because the automounter allows all users to mount file systems, root access is not required. It also provides for automatic unmounting of file systems, so there is no need to unmount file systems after you are finished.

How to Disable Large Files on an NFS Server

For servers that support clients that cannot handle a file over 2 GBytes, you must disable the ability to create large files.


Note -

Previous versions of the Solaris operating environment cannot use large files. Check that clients of the NFS server are running at least the 2.6 release if the clients need to access large files.


  1. Become superuser.

  2. Check that no large files exist on the file system.

    Here is an example of a command that you can run to locate large files:


    # cd /export/home1
    # find . -xdev -size +2000000 -exec ls -l {} \;
    

    If large files are on the file system, you must remove or move them to another file system.
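The -size +2000000 test above is expressed in 512-byte blocks, not bytes, so it is a deliberately conservative threshold well under the 2-GByte limit. A quick sketch of the arithmetic:

```shell
# Sketch: convert the find threshold (512-byte blocks) to bytes, and
# show how many blocks make up exactly 2 GBytes (2^31 bytes).
expr 2000000 \* 512       # 1024000000 bytes (about 1 GByte)
expr 2147483648 / 512     # 4194304 blocks in exactly 2 GBytes
```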

  3. Unmount the file system.


    # umount /export/home1
    
  4. Reset the file system state if the file system has been mounted using -largefiles.

    fsck resets the file system state if no large files exist on the file system:


    # fsck /export/home1
    
  5. Mount the file system using nolargefiles.


    # mount -F ufs -o nolargefiles /export/home1
    

    You can do this from the command line, but to make the option more permanent, add an entry like the following into /etc/vfstab:


    /dev/dsk/c0t3d0s1 /dev/rdsk/c0t3d0s1 /export/home1  ufs  2  yes  nolargefiles

How to Use Client-Side Failover

  1. Become superuser.

  2. On the NFS client, mount the file system using the ro option.

    You can do this from the command line, through the automounter, or by adding an entry to /etc/vfstab that looks like:


    bee,wasp:/export/share/local  -  /usr/local  nfs  -  no  ro

    The automounter has accepted this syntax in earlier releases, but previously failover occurred only while a server was being selected, not after the file system was mounted.


    Note -

    Servers that are running different versions of the NFS protocol cannot be mixed on a command line or in a vfstab entry. Mixing servers that support the NFS V2 and V3 protocols can be done only with autofs, in which case the best subset of version 2 or version 3 servers is used.
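The part of the entry before the colon is a comma-separated list of the replicated servers that failover can choose between. A sketch of reading that list back out of the example entry:

```shell
# Sketch: extract the replicated-server list from the failover entry.
echo 'bee,wasp:/export/share/local' | sed 's|:.*||' | tr ',' '\n'
# Prints:
#   bee
#   wasp
```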


How to Disable Mount Access for One Client

  1. Become superuser.

  2. Add an entry in /etc/dfs/dfstab.

    The first example allows mount access to all clients in the eng netgroup except the host named rose. The second example allows mount access to all clients in the eng.sun.com DNS domain except for rose.


    share -F nfs -o ro=-rose:eng /export/share/man
    share -F nfs -o ro=-rose:.eng.sun.com /export/share/man

    For additional information on access lists, see "Setting Access Lists With the share Command".

  3. Share the file system.

    The NFS server does not use changes to /etc/dfs/dfstab until the file systems are shared again or until the server is rebooted.


    # shareall

How to Mount an NFS File System Through a Firewall

  1. Become superuser.

  2. Manually mount the file system, using a command like:


    # mount -F nfs -o public bee:/export/share/local /mnt
    

    In this example the file system /export/share/local is mounted on the local client using the public file handle. An NFS URL can be used instead of the standard path name. If the public file handle is not supported by the server bee, the mount operation will fail.


    Note -

    This procedure requires that the file system on the NFS server be shared with the public option and that any firewalls between the client and the server allow TCP connections on port 2049. Starting with the 2.6 release, all file systems that are shared allow for public file handle access.


How to Mount an NFS File System Using an NFS URL

  1. Become superuser.

  2. Manually mount the file system, using a command such as:


    # mount -F nfs nfs://bee:3000/export/share/local /mnt
    

    In this example, the /export/share/local file system is being mounted from the server bee using NFS port number 3000. The port number is not required; by default, the standard NFS port number of 2049 is used. You can include the public option with an NFS URL if you want. Without the public option, the MOUNT protocol is used if the public file handle is not supported by the server. The public option forces the use of the public file handle, and the mount fails if the public file handle is not supported.
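The URL in the example is assembled from a server name, an optional port, and a path. A sketch of that assembly, with the values taken from the example above:

```shell
# Sketch: build NFS URLs from their parts; omitting the port part
# implies the standard NFS port, 2049.
server=bee port=3000 path=/export/share/local
echo "nfs://$server:$port$path"   # nfs://bee:3000/export/share/local
echo "nfs://$server$path"         # nfs://bee/export/share/local (port 2049)
```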

Setting Up NFS Services

This section discusses some of the tasks necessary to initialize or use NFS services.

Table 30-3 NFS Services Task Map

Task: Start the NFS server
Description: Steps to start the NFS service, if it has not been started automatically.
For Instructions, Go To: "How to Start the NFS Services"

Task: Stop the NFS server
Description: Steps to stop the NFS service. Normally the service should not need to be stopped.
For Instructions, Go To: "How to Stop the NFS Services"

Task: Start the automounter
Description: Steps to start the automounter. This procedure is required when some of the automounter maps are changed.
For Instructions, Go To: "How to Start the Automounter"

Task: Stop the automounter
Description: Steps to stop the automounter. This procedure is required when some of the automounter maps are changed.
For Instructions, Go To: "How to Stop the Automounter"

How to Start the NFS Services

  1. Become superuser.

  2. Enable the NFS service daemons.

    Type the following command:


    # /etc/init.d/nfs.server start
    

    This starts the daemons if there is an entry in /etc/dfs/dfstab.
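A simplified sketch of the kind of check the startup script performs: the daemons are started only when dfstab contains an uncommented share entry. The file name and contents below are illustrative, not the real script logic:

```shell
# Sketch: start the NFS daemons only if a dfstab-style file contains
# at least one uncommented share entry. File contents are illustrative.
cat > /tmp/dfstab.check <<'EOF'
# share -F nfs -o ro /export/old      (commented out; ignored)
share -F nfs -o ro /export/share/man
EOF
if grep -q '^[^#]*share' /tmp/dfstab.check; then
    echo "would start NFS daemons"
fi
# Prints: would start NFS daemons
```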

How to Stop the NFS Services

  1. Become superuser.

  2. Disable the NFS service daemons.

    Type the following command:


    # /etc/init.d/nfs.server stop
    

How to Start the Automounter

  1. Become superuser.

  2. Enable the autofs daemon.

    Type the following command:


    # /etc/init.d/autofs start
    

    This starts the daemon.

How to Stop the Automounter

  1. Become superuser.

  2. Disable the autofs daemon.

    Type the following command:


    # /etc/init.d/autofs stop
    

Administering the Secure NFS System

To use the Secure NFS system, all the computers you are responsible for must have a domain name. A domain is an administrative entity, typically consisting of several computers, that is part of a larger network. If you are running NIS+, you should also establish the NIS+ name service for the domain. See Solaris Naming Setup and Configuration Guide.

You can configure the Secure NFS environment to use Diffie-Hellman authentication. "Managing System Security (Overview)" in System Administration Guide, Volume 2 discusses this authentication service.

How to Set Up a Secure NFS Environment With DH Authentication

  1. Assign your domain a domain name, and make the domain name known to each computer in the domain.

    See the Solaris Naming Administration Guide if you are using NIS+ as your name service.

  2. Establish public keys and secret keys for your clients' users using the newkey or nisaddcred command, and have each user establish his or her own secure RPC password using the chkey command.


    Note -

    For information about these commands, see the newkey(1M), the nisaddcred(1M), and the chkey(1) man pages.


    When public and secret keys have been generated, the public and encrypted secret keys are stored in the publickey database.

  3. Verify that the name service is responding. If you are running NIS+, type the following:


    # nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995

    If you are running NIS, verify that the ypbind daemon is running.

  4. Verify that the keyserv daemon (the key server) is running.

    Type the following command.


    # ps -ef | grep keyserv
    root    100     1  16    Apr 11 ?      0:00 /usr/sbin/keyserv
    root   2215  2211   5  09:57:28 pts/0  0:00 grep keyserv

    If the daemon isn't running, start the key server by typing the following:


    # /usr/sbin/keyserv
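The second line in the ps output above is the grep process matching its own pattern, not a second daemon. Filtering it out leaves only the daemon line; a sketch against a captured sample of that listing (the listing is sample data, not live output):

```shell
# Sketch: drop the "grep matches itself" line from a ps listing.
# The sample lines mirror the output shown above.
cat > /tmp/ps.demo <<'EOF'
root    100     1  16    Apr 11 ?      0:00 /usr/sbin/keyserv
root   2215  2211   5  09:57:28 pts/0  0:00 grep keyserv
EOF
grep keyserv /tmp/ps.demo | grep -v 'grep keyserv'
# Prints only the /usr/sbin/keyserv line
```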
    
  5. Decrypt and store the secret key.

    Usually, the login password is identical to the network password. In this case, keylogin is not required. If the passwords are different, the users have to log in, and then do a keylogin. You still need to use the keylogin -r command as root to store the decrypted secret key in /etc/.rootkey.


    Note -

    You only need to run keylogin -r if the root secret key changes or /etc/.rootkey is lost.


  6. Update mount options for the file system.

    Edit the /etc/dfs/dfstab file and add the sec=dh option to the appropriate entries (for Diffie-Hellman authentication).


    share -F nfs -o sec=dh /export/home
    
  7. Update the automounter maps for the file system.

    Edit the auto_master data to include sec=dh as a mount option in the appropriate entries (for Diffie-Hellman authentication):


    /home	auto_home	-nosuid,sec=dh

    Note -

    With the 2.5 and earlier Solaris releases, if a client does not mount as secure a file system that is shared as secure, users have access as user nobody rather than as themselves. With NFS version 2 on later releases, the NFS server refuses access if the security modes do not match, unless -sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so clients do not need to specify sec=krb4 or sec=dh; the users have access to the files as themselves.


    When you reinstall, move, or upgrade a computer, remember to save /etc/.rootkey if you do not establish new keys or change them for root. If you do delete /etc/.rootkey, you can always type:


    # keylogin -r
    

WebNFS Administration Tasks

This section provides instructions for administering the WebNFS system. The following table lists related tasks.

Table 30-4 WebNFS Administration Task Map

Task: Plan for WebNFS
Description: Issues to consider before enabling the WebNFS service.
For Instructions, Go To: "Planning for WebNFS Access"

Task: Enable WebNFS
Description: Steps to enable mounting of an NFS file system by using the WebNFS protocol.
For Instructions, Go To: "How to Enable WebNFS Access"

Task: Enable WebNFS through a firewall
Description: Steps to allow access to files through a firewall by using the WebNFS protocol.
For Instructions, Go To: "Enabling WebNFS Access Through a Firewall"

Task: Browse using an NFS URL
Description: Instructions for using an NFS URL within a web browser.
For Instructions, Go To: "Browsing Using an NFS URL"

Task: Use a public file handle with autofs
Description: Steps to force use of the public file handle when mounting a file system with the automounter.
For Instructions, Go To: "How to Use a Public File Handle With Autofs"

Task: Use an NFS URL with autofs
Description: Steps to add an NFS URL to the automounter maps.
For Instructions, Go To: "How to Use NFS URLs With Autofs"

Task: Provide access to a file system through a firewall
Description: Steps to allow access to a file system through a firewall by using the WebNFS protocol.
For Instructions, Go To: "How to Mount an NFS File System Through a Firewall"

Task: Mount a file system using an NFS URL
Description: Steps to allow access to a file system by using an NFS URL. This process allows for file-system access without using the MOUNT protocol.
For Instructions, Go To: "How to Mount an NFS File System Using an NFS URL"

Planning for WebNFS Access

To use the WebNFS functionality, you first need an application capable of running and loading an NFS URL (for example, nfs://server/path). The next step is to choose the file system that will be exported for WebNFS access. If the application is web browsing, often the document root for the web server is used. Several factors need to be considered when choosing a file system to export for WebNFS access.

  1. Each server has one public file handle that by default is associated with the server's root file system. The path in an NFS URL is evaluated relative to the directory with which the public file handle is associated. If the path leads to a file or directory within an exported file system, the server provides access. You can use the -public option of the share command to associate the public file handle with a specific exported directory. Using this option allows URLs to be relative to the shared file system rather than to the server's root file system. By default the public file handle points to the root file system, but this file handle does not allow web access unless the root file system is shared.

  2. The WebNFS environment allows users who already have mount privileges to access files through a browser regardless of whether the file system is exported using the -public option. Because users already have access to these files through the NFS setup, this should not create any additional security risk. You only need to share a file system using the -public option if users who cannot mount the file system need to use WebNFS access.

  3. File systems that are already open to the public make good candidates for the -public option--for example, the top directory in an ftp archive or the main URL directory for a web site.

  4. You can use the -index option with the share command to force the loading of an HTML file instead of listing the directory when an NFS URL is accessed.

    After a file system is chosen, review the files and set access permissions to restrict viewing of files or directories as needed. Establish the permissions as appropriate for any NFS file system that is being shared. For many sites, 755 permissions for directories and 644 permissions for files provide the correct level of access.

    Additional factors need to be considered if both NFS and HTTP URLs are to be used to access one web site. These factors are described in "WebNFS Limitations With Web Browser Use".
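As a sketch of applying the suggested 755/644 permissions, here is a demo tree in /tmp (the directory and file names are illustrative):

```shell
# Sketch: apply the suggested permissions to a demo directory and file.
mkdir -p /tmp/webnfs-demo
echo '<html></html>' > /tmp/webnfs-demo/index.html
chmod 755 /tmp/webnfs-demo              # rwxr-xr-x for directories
chmod 644 /tmp/webnfs-demo/index.html   # rw-r--r-- for files
ls -ld /tmp/webnfs-demo | cut -c1-10            # drwxr-xr-x
ls -l /tmp/webnfs-demo/index.html | cut -c1-10  # -rw-r--r--
```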

Browsing Using an NFS URL

Browsers capable of supporting WebNFS access should provide access using an NFS URL that looks something like:


nfs://server<:port>/path

server - Name of the file server

port - Port number to use (the default value is 2049)

path - Path to file, which can be relative to the public file handle or to the root file system
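A sketch of taking a sample NFS URL apart into those three components with shell parameter expansion (this simple version assumes the optional port is present):

```shell
# Sketch: split an NFS URL into server, port, and path components.
url='nfs://bee:3000/export/share/local'
rest=${url#nfs://}              # bee:3000/export/share/local
hostport=${rest%%/*}            # bee:3000
echo "server: ${hostport%%:*}"  # server: bee
echo "port:   ${hostport#*:}"   # port:   3000
echo "path:   /${rest#*/}"      # path:   /export/share/local
```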


Note -

In most browsers, the URL service type (for example, nfs or http) is remembered from one transaction to the next, unless a URL that includes a different service type is loaded. When you are using NFS URLs and a reference to an HTTP URL is loaded, subsequent pages are loaded by using the HTTP protocol instead of the NFS protocol, unless the URLs explicitly specify the nfs service type.


Enabling WebNFS Access Through a Firewall

You can enable WebNFS access for clients that are not part of the local subnet by configuring the firewall to allow a TCP connection on port 2049. Just allowing access for httpd does not allow NFS URLs to be used.

Autofs Administration Task Overview

This section describes some of the most common tasks you might encounter in your own environment. Recommended procedures are included for each scenario to help you configure autofs to best meet your clients' needs.


Note -

Use the Solstice System Management Tools or see the Solaris Naming Administration Guide to perform the tasks discussed in this section.


Autofs Administration Task Map

The following table describes many of the tasks that are related to autofs and points to the instructions for each.

Table 30-5 Autofs Administration Task Map

Task: Start autofs
Description: Start the automount service without having to reboot the system.
For Instructions, Go To: "How to Start the Automounter"

Task: Stop autofs
Description: Stop the automount service without disabling other network services.
For Instructions, Go To: "How to Stop the Automounter"

Task: Access file systems using autofs
Description: Access file systems using the automount service.
For Instructions, Go To: "Mounting With the Automounter"

Task: Modify the autofs maps
Description: Steps to modify the master map, which should be used to list other maps.
For Instructions, Go To: "How to Modify the Master Map"

Description: Steps to modify an indirect map, which should be used for most maps.
For Instructions, Go To: "How to Modify Indirect Maps"

Description: Steps to modify a direct map, which should be used when a direct association between a mount point on a client and a server is required.
For Instructions, Go To: "How to Modify Direct Maps"

Task: Modify the autofs maps to access non-NFS file systems
Description: Steps to set up an autofs map with an entry for a CD-ROM application.
For Instructions, Go To: "How to Access CD-ROM Applications With Autofs"

Description: Steps to set up an autofs map with an entry for a PC-DOS diskette.
For Instructions, Go To: "How to Access PC-DOS Data Diskettes With Autofs"

Description: Steps to use autofs to access a CacheFS file system.
For Instructions, Go To: "How to Access NFS File Systems Using CacheFS"

Task: Use /home
Description: Example of how to set up a common /home map.
For Instructions, Go To: "Setting Up a Common View of /home"

Description: Steps to set up a /home map that refers to multiple file systems.
For Instructions, Go To: "How to Set Up /home With Multiple Home Directory File Systems"

Task: Use a new autofs mount point
Description: Steps to set up a project-related autofs map.
For Instructions, Go To: "How to Consolidate Project-Related Files Under /ws"

Description: Steps to set up an autofs map that supports different client architectures.
For Instructions, Go To: "How to Set Up Different Architectures to Access a Shared Name Space"

Description: Steps to set up an autofs map that supports different operating systems.
For Instructions, Go To: "How to Support Incompatible Client Operating System Versions"

Task: Replicate file systems with autofs
Description: Provide access to file systems that fail over.
For Instructions, Go To: "How to Replicate Shared Files Across Several Servers"

Task: Use security restrictions with autofs
Description: Provide access to file systems while restricting remote root access to the files.
For Instructions, Go To: "How to Apply Security Restrictions"

Task: Use a public file handle with autofs
Description: Force use of the public file handle when mounting a file system.
For Instructions, Go To: "How to Use a Public File Handle With Autofs"

Task: Use an NFS URL with autofs
Description: Add an NFS URL so that the automounter can use it.
For Instructions, Go To: "How to Use NFS URLs With Autofs"

Task: Disable autofs browsability
Description: Steps to disable browsability so that autofs mount points are not automatically populated on a single client.
For Instructions, Go To: "How to Completely Disable Autofs Browsability on a Single NFS Client"

Description: Steps to disable browsability so that autofs mount points are not automatically populated on all clients.
For Instructions, Go To: "How to Disable Autofs Browsability for All Clients"

Description: Steps to disable browsability so that a specific autofs mount point is not automatically populated on a client.
For Instructions, Go To: "How to Disable Autofs Browsability on an NFS Client"

Administrative Tasks Involving Maps

The following tables describe several of the factors you need to be aware of when administering autofs maps. The type of map and the name service you choose determine the mechanism that you must use to make changes to the autofs maps.

The following table describes the types of maps and their uses.

Table 30-6 Types of autofs Maps and Their Uses

Master: Associates a directory with a map
Direct: Directs autofs to specific file systems
Indirect: Directs autofs to reference-oriented file systems

The following table describes how to make changes to your autofs environment based on your name service.

Table 30-7 Map Maintenance

Local files: Text editor
NIS: make files
NIS+: nistbladm

The next table tells you when to run the automount command, depending on the modification you have made to a type of map. For example, if you have made an addition or a deletion to a direct map, you need to run the automount command on the local system to allow the change to take effect; however, if you have modified an existing entry, you do not need to run the automount command for the change to take effect.

Table 30-8 When to Run the automount Command

Type of Map    Restart automount?
               Addition or Deletion    Modification
auto_master    Y                       Y
direct         Y                       N
indirect       N                       N
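The rules in Table 30-8 can be encoded as a small shell helper. This is an illustrative sketch only; need_automount and its arguments are invented for this example and are not Solaris commands.

```shell
# Hypothetical helper encoding Table 30-8: print "yes" if the automount
# command must be rerun for a given map type and kind of change.
need_automount() {
    map=$1       # auto_master, direct, or indirect
    change=$2    # add, delete, or modify
    case "$map:$change" in
        auto_master:*)            echo yes ;;  # any change to the master map
        direct:add|direct:delete) echo yes ;;  # additions or deletions to a direct map
        *)                        echo no  ;;  # takes effect on the next mount
    esac
}

need_automount direct modify      # prints "no"
need_automount auto_master add    # prints "yes"
```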

Modifying the Maps

The following procedures require that you use NIS+ as your name service.

How to Modify the Master Map

  1. Using the nistbladm command, make the changes you want to the master map.

    See the Solaris Naming Administration Guide.

  2. For each client, become superuser.

  3. For each client, run the automount command to ensure the changes you made take effect.

  4. Notify your users of the changes.

    Notification is required so that the users can also run the automount command as superuser on their own computers.

The automount command gathers information from the master map whenever it is run.

How to Modify Indirect Maps

    Using the nistbladm command, make the changes you want to the indirect map.

    See the Solaris Naming Administration Guide.

The change takes effect the next time the map is used, which is the next time a mount is done.

How to Modify Direct Maps

  1. Using the nistbladm command, add or delete the changes you want to the direct map.

    See the Solaris Naming Administration Guide.

  2. If you added or deleted a mount-point entry in step 1, run the automount command.

  3. Notify your users of the changes.

    Notification is required so that the users can also run the automount command as superuser on their own computers.


    Note -

    If you only modify or change the contents of an existing direct map entry, you do not need to run the automount command.


    For example, suppose you modify the auto_direct map so that the /usr/src directory is now mounted from a different server. If /usr/src is not mounted at this time, the new entry takes effect immediately when you try to access /usr/src. If /usr/src is mounted now, you can wait until the auto-unmounting takes place, then access it.


    Note -

    Because direct maps require these additional steps and take up more space in the mount table than indirect maps do, use indirect maps whenever possible. They are easier to construct, and less demanding on the computers' file systems.


Avoiding Mount-Point Conflicts

If you have a local disk partition mounted on /src and you also want to use the autofs service to mount other source directories, you might encounter a problem. If you specify the mount point /src, the service hides the local partition whenever you try to reach it.

You need to mount the partition somewhere else; for example, on /export/src. You would then need an entry in /etc/vfstab like:


/dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /export/src ufs 3 yes - 

and this entry in auto_src:


terra		terra:/export/src 

where terra is the name of the computer.

Accessing Non-NFS File Systems

Autofs can also mount file systems other than NFS file systems. Autofs mounts file systems on removable media, such as diskettes or CD-ROMs. Normally, you would mount file systems on removable media by using the Volume Manager. The following examples show how this mounting could be done through autofs. The Volume Manager and autofs do not work together, so these entries could not be used without first deactivating the Volume Manager.

Instead of mounting a file system from a server, you put the media in the drive and reference it from the map. If you want to access non-NFS file systems through autofs, see the following procedures.

How to Access CD-ROM Applications With Autofs


Note -

Use this procedure if you are not using Volume Manager.


  1. Become superuser.

  2. Update the autofs map.

    Add an entry for the CD-ROM file system, which should look like:


    hsfs     -fstype=hsfs,ro     :/dev/sr0

    The CD-ROM device you want to mount must appear as a name following a colon.

How to Access PC-DOS Data Diskettes With Autofs


Note -

Use this procedure if you are not using Volume Manager.


  1. Become superuser.

  2. Update the autofs map.

    Add an entry for the diskette file system such as:


     pcfs     -fstype=pcfs     :/dev/diskette

Accessing NFS File Systems Using CacheFS

The cache file system (CacheFS) is a generic nonvolatile caching mechanism that improves the performance of certain file systems by utilizing a small, fast, local disk.

You can improve the performance of the NFS environment by using CacheFS to cache data from an NFS file system on a local disk.

How to Access NFS File Systems Using CacheFS

  1. Become superuser.

  2. Run the cfsadmin command to create a cache directory on the local disk.


    # cfsadmin -c /var/cache
    
  3. Add the cachefs entry to the appropriate automounter map.

    For example, adding this entry to the master map caches all home directories:


    /home auto_home -fstype=cachefs,cachedir=/var/cache,backfstype=nfs

    Adding this entry to the auto_home map only caches the home directory for the user named rich:


    rich -fstype=cachefs,cachedir=/var/cache,backfstype=nfs dragon:/export/home1/rich

    Note -

    Options that are included in maps that are searched later override options set in maps that are searched earlier; the last options found are the ones used. In the previous example, a specific entry added to the auto_home map needs to include the options listed in the master map only if some of those options must be changed.
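The last-options-found-win rule can be sketched in shell. merge_opts is an invented helper for illustration; it is not part of the automounter, and the order of options in its output is unspecified.

```shell
# Invented helper illustrating the rule: options found later override
# options found earlier. Each option keeps only its last occurrence.
merge_opts() {
    printf '%s\n%s\n' "$1" "$2" | tr ',' '\n' |
    awk -F= '{ seen[$1] = $0 }
        END { out = ""
              for (k in seen) out = out (out == "" ? "" : ",") seen[k]
              print out }'
}

# The earlier map's options lose to the later entry's options:
merge_opts "rsize=8192,ro" "rsize=32768"   # rsize=32768 wins; ro survives
```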


Customizing the Automounter

You can set up the automounter maps in several ways. The following tasks give detailed instructions on how to customize the automounter maps to provide an easy-to-use directory structure.

Setting Up a Common View of /home

The ideal is for all network users to be able to locate their own, or anyone else's home directory under /home. This view should be common across all computers, whether client or server.

Every Solaris installation comes with a master map: /etc/auto_master.


# Master map for autofs
#
+auto_master
/net     -hosts     -nosuid,nobrowse
/home    auto_home  -nobrowse
/xfn     -xfn

A map for auto_home is also installed under /etc.


# Home directory map for autofs
#
+auto_home

Except for a reference to an external auto_home map, this map is empty. If the directories under /home are to be common to all computers, do not modify this /etc/auto_home map. All home directory entries should appear in the name service files, either NIS or NIS+.


Note -

Users should not be permitted to run setuid executables from their home directories; without this restriction, any user could have superuser privileges on any computer.


How to Set Up /home With Multiple Home Directory File Systems

  1. Become superuser.

  2. Install home directory partitions under /export/home.

    If there are several partitions, install them under separate directories, for example, /export/home1, /export/home2, and so on.

  3. Use the Solstice System Management Tools to create and maintain the auto_home map.

    Whenever you create a new user account, type the location of the user's home directory in the auto_home map. Map entries can be simple, for example:


    rusty        dragon:/export/home1/&
    gwenda       dragon:/export/home1/&
    charles      sundog:/export/home2/&
    rich         dragon:/export/home3/&

    Notice the use of the & (ampersand) to substitute the map key. The ampersand is an abbreviation for the key, so the first entry is equivalent to the following:


    rusty     	dragon:/export/home1/rusty

    With the auto_home map in place, users can refer to any home directory (including their own) with the path /home/user, where user is their login name and the key in the map. This common view of all home directories is valuable when logging in to another user's computer. Autofs mounts your home directory for you. Similarly, if you run a remote windowing system client on another computer, the client program has the same view of the /home directory as you do on the computer providing the windowing system display.

    This common view also extends to the server. Using the previous example, if rusty logs in to the server dragon, autofs there provides direct access to the local disk by loopback-mounting /export/home1/rusty onto /home/rusty.

    Users do not need to be aware of the real location of their home directories. If rusty needs more disk space and needs to have his home directory relocated to another server, you need only change rusty's entry in the auto_home map to reflect the new location. Everyone else can continue to use the /home/rusty path.
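The ampersand substitution described above can be sketched with a few lines of awk. expand_keys is an invented name, and the map text mirrors the example entries.

```shell
# expand_keys (invented name) replaces each & in the location field with
# the entry key, as the automounter does.
expand_keys() {
    awk '{ value = $2; gsub(/&/, $1, value); print $1, value }'
}

expand_keys <<'EOF'
rusty dragon:/export/home1/&
charles sundog:/export/home2/&
EOF
# prints:
#   rusty dragon:/export/home1/rusty
#   charles sundog:/export/home2/charles
```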

How to Consolidate Project-Related Files Under /ws

Assume you are the administrator of a large software development project. You want to make all project-related files available under a directory called /ws. This directory is to be common across all workstations at the site.

  1. Add an entry for the /ws directory to the site auto_master map, either NIS or NIS+.


    /ws     auto_ws     -nosuid 

    The auto_ws map determines the contents of the /ws directory.

  2. Add the -nosuid option as a precaution.

    This option prevents users from running setuid programs that might exist in any workspaces.

  3. Add entries to the auto_ws map.

    The auto_ws map is organized so that each entry describes a subproject. Your first attempt yields a map that looks like the following:


    compiler   alpha:/export/ws/&
    windows    alpha:/export/ws/&
    files      bravo:/export/ws/&
    drivers    alpha:/export/ws/&
    man        bravo:/export/ws/&
    tools      delta:/export/ws/&

    The ampersand (&) at the end of each entry is an abbreviation for the entry key. For instance, the first entry is equivalent to:


    compiler		alpha:/export/ws/compiler 

    This first attempt provides a map that looks simple, but it turns out to be inadequate. The project organizer decides that the documentation in the man entry should be provided as a subdirectory under each subproject. Also, each subproject requires subdirectories to describe several versions of the software. You must assign each of these subdirectories to an entire disk partition on the server.

    Modify the entries in the map as follows:


    compiler \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /man        bravo:/export/ws/&/man
    windows \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    files \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /vers3.0    bravo:/export/ws/&/vers3.0 \
        /man        bravo:/export/ws/&/man
    drivers \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    tools \
        /           delta:/export/ws/&

    Although the map now appears to be much larger, it still contains only five entries. Each entry is larger because it contains multiple mounts. For instance, a reference to /ws/compiler requires three mounts for the vers1.0, vers2.0, and man directories. The backslash at the end of each line tells autofs that the entry is continued onto the next line. In effect, the entry is one long line, though line breaks and some indenting have been used to make it more readable. The tools directory contains software development tools for all subprojects, so it is not subject to the same subdirectory structure. The tools directory continues to be a single mount.

    This arrangement provides the administrator with much flexibility. Software projects are notorious for consuming substantial amounts of disk space. Through the life of the project you might be required to relocate and expand various disk partitions. As long as these changes are reflected in the auto_ws map, the users do not need to be notified, as the directory hierarchy under /ws is not changed.

    Because the servers alpha and bravo view the same autofs map, any users who log in to these computers can find the /ws name space as expected. These users are provided with direct access to local files through loopback mounts instead of NFS mounts.
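The backslash-continued entries above are logically single lines. The following awk sketch joins continuation lines the way autofs reads them; join_continuations is an invented helper, not part of the automounter.

```shell
# join_continuations (invented name) folds lines that end in a backslash
# into the following line, yielding one logical line per map entry.
join_continuations() {
    awk '{ if (sub(/\\$/, "")) buf = buf $0
           else { print buf $0; buf = "" } }'
}

join_continuations <<'EOF'
drivers \
    /vers1.0    alpha:/export/ws/&/vers1.0 \
    /man        bravo:/export/ws/&/man
EOF
# prints the drivers entry as one long line
```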

How to Set Up Different Architectures to Access a Shared Name Space

You need to assemble a shared name space for local executables and applications, such as spreadsheet tools and word-processing packages. The clients of this name space use several different workstation architectures that require different executable formats. Also, some workstations are running different releases of the operating system.

  1. Create the auto_local map with the nistbladm command.

    See the Solaris Naming Administration Guide.

  2. Choose a single, site-specific name for the shared name space so that files and directories that belong to this space are easily identifiable.

    For example, if you choose /usr/local as the name, the path /usr/local/bin is obviously a part of this name space.

  3. For ease of user community recognition, create an autofs indirect map and mount it at /usr/local. Set up the following entry in the NIS+ (or NIS) auto_master map:


    /usr/local     auto_local     -ro

    Notice that the -ro mount option implies that clients will not be able to write to any files or directories.

  4. Export the appropriate directory on the server.

  5. Include a bin entry in the auto_local map.

    Your directory structure looks like this:


     bin     aa:/export/local/bin 

    To satisfy the need to serve clients of different architectures, references to the bin directory need to be directed to different directories on the server, depending on the clients' architecture type.

  6. To serve clients of different architectures, change the entry by adding the autofs CPU variable.


    bin     aa:/export/local/bin/$CPU 
    • For SPARC clients - Place executables in /export/local/bin/sparc

    • For IA clients - Place executables in /export/local/bin/i386
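The variable expansion in step 6 can be sketched in plain shell. Here CPU is set by hand so both expansions are visible; on a real client, autofs derives the value from the architecture (compare the output of uname -p).

```shell
# The map entry from step 6, with the variable expansion done by hand.
entry='bin     aa:/export/local/bin/$CPU'

CPU=sparc
eval "echo \"$entry\""    # bin     aa:/export/local/bin/sparc

CPU=i386
eval "echo \"$entry\""    # bin     aa:/export/local/bin/i386
```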

How to Support Incompatible Client Operating System Versions

  1. Combine the architecture type with a variable that determines the operating system type of the client.

    The autofs OSREL variable can be combined with the CPU variable to form a name that determines both CPU type and OS release.

  2. Create the following map entry.


    bin     aa:/export/local/bin/$CPU$OSREL

    For clients running version 5.6 of the operating system, export the following file systems:

    • For SPARC clients - Export /export/local/bin/sparc5.6

    • For IA clients - Export /export/local/bin/i3865.6

How to Replicate Shared Files Across Several Servers

The best way to share replicated file systems that are read-only is to use failover. See "Client-Side Failover" for a discussion of failover.

  1. Become superuser.

  2. Modify the entry in the autofs maps.

    Create the list of all replica servers as a comma-separated list, such as:


    bin     aa,bb,cc,dd:/export/local/bin/$CPU
    

    Autofs chooses the nearest server. If a server has several network interfaces, list each interface. Autofs chooses the nearest interface to the client, avoiding unnecessary routing of NFS traffic.

How to Apply Security Restrictions

  1. Become superuser.

  2. Create the following entry in the name service auto_master file, either NIS or NIS+:


    /home     auto_home     -nosuid
    

    The nosuid option prevents users from creating files with the setuid or setgid bit set.

    This entry overrides the entry for /home in a generic local /etc/auto_master file (see the previous example) because the +auto_master reference to the external name service map occurs before the /home entry in the file. If the entries in the auto_home map include mount options, the nosuid option is overwritten, so either no options should be used in the auto_home map or the nosuid option must be included with each entry.


    Note -

    Do not mount the home directory disk partitions on or under /home on the server.


How to Use a Public File Handle With Autofs

  1. Become superuser.

  2. Create an entry in the autofs map like:


    /usr/local     -ro,public    bee:/export/share/local

    The public option forces the public handle to be used. If the NFS server does not support a public file handle, the mount will fail.

How to Use NFS URLs With Autofs

  1. Become superuser.

  2. Create an autofs entry like:


    /usr/local     -ro    nfs://bee/export/share/local

    The service tries to use the public file handle on the NFS server, but if the server does not support a public file handle, the MOUNT protocol is used.

Disabling Autofs Browsability

Starting with the Solaris 2.6 release, the default version of /etc/auto_master that is installed has the -nobrowse option added to the entries for /home and /net. In addition, the upgrade procedure adds the -nobrowse option to the /home and /net entries in /etc/auto_master if these entries have not been modified. However, it might be necessary to make these changes manually or to turn off browsability for site-specific autofs mount points after the installation.

You can turn off the browsability feature in several ways: disable it with a command-line option to the automountd daemon, which completely disables autofs browsability for the client; disable it for each map entry on all clients by using the autofs maps in either a NIS or NIS+ name space; or disable it for each map entry on each client by using local autofs maps, if no network-wide name space is being used.

How to Completely Disable Autofs Browsability on a Single NFS Client

  1. Become superuser.

  2. Add the -n option to the startup script.

    As root, edit the /etc/init.d/autofs script and add the -n option to the line that starts the automountd daemon:


    	/usr/lib/autofs/automountd -n \
    		< /dev/null > /dev/console 2>&1	# start daemon
  3. Restart the autofs service.


    # /etc/init.d/autofs stop
    # /etc/init.d/autofs start
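The edit in step 2 can also be scripted. This sketch applies the same change with sed to a mocked-up copy of the automountd line (the file name is for this demo only); on a real system you would edit /etc/init.d/autofs itself.

```shell
# Mock up the daemon-start line from the startup script.
cat > /tmp/autofs.demo <<'EOF'
/usr/lib/autofs/automountd < /dev/null > /dev/console 2>&1  # start daemon
EOF

# "&" in the sed replacement stands for the matched text.
sed 's|/usr/lib/autofs/automountd|& -n|' /tmp/autofs.demo
# prints: /usr/lib/autofs/automountd -n < /dev/null > /dev/console 2>&1  # start daemon
```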
    

How to Disable Autofs Browsability for All Clients

To disable browsability for all clients, you must employ a name service such as NIS or NIS+. Otherwise, you need to manually edit the automounter maps on each client. In this example, the browsability of the /home directory is disabled. You must follow this procedure for each indirect autofs node that needs to be disabled.

  1. Add the -nobrowse option to the /home entry in the name service auto_master file.


    /home     auto_home     -nobrowse 
    
  2. On all clients: run the automount command.

    The new behavior takes effect after running the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
    

How to Disable Autofs Browsability on an NFS Client

In this example, browsability of the /net directory is disabled. The same procedure can be used for /home or any other autofs mount points.

  1. Check the automount entry in /etc/nsswitch.conf.

    For local file entries to take precedence, the entry in the name service switch file should list files before the name service. For example:


    automount:  files nisplus

    This is the default configuration in a standard Solaris installation.

  2. Check the position of the +auto_master entry in /etc/auto_master.

    For additions to the local files to take precedence over the entries in the name space, the +auto_master entry must be moved below /net:


    # Master map for automounter
    #
    /net    -hosts     -nosuid
    /home   auto_home
    /xfn    -xfn
    +auto_master
    

    A standard configuration places the +auto_master entry at the top of the file. This prevents any local changes from being used.

  3. Add the -nobrowse option to the /net entry in the /etc/auto_master file.


    /net     -hosts     -nosuid,nobrowse 
    
  4. On all clients: run the automount command.

    The new behavior takes effect after running the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
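The ordering requirement from step 2 can be checked mechanically. check_order is an invented helper for illustration: it exits successfully only if /net appears before +auto_master in the map it reads on standard input.

```shell
# check_order (invented) succeeds only when the /net entry precedes
# +auto_master, so local additions take precedence over the name service.
check_order() {
    awk '$1 == "/net"         { net  = NR }
         $1 == "+auto_master" { plus = NR }
         END { exit !(net && plus && net < plus) }'
}

check_order <<'EOF' && echo "local /net entry takes precedence"
/net    -hosts     -nosuid,nobrowse
/home   auto_home
+auto_master
EOF
```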
    

Strategies for NFS Troubleshooting

When tracking down an NFS problem, keep in mind the main points of possible failure: the server, the client, and the network. The strategy outlined in this section tries to isolate each individual component to find the one that is not working. In all cases, the mountd and nfsd daemons must be running on the server for remote mounts to succeed.


Note -

The mountd and nfsd daemons start automatically at boot time only if NFS share entries are in the /etc/dfs/dfstab file. Therefore, mountd and nfsd must be started manually when setting up sharing for the first time.


The -intr option is set by default for all mounts. If a program hangs with a "server not responding" message, you can kill it with the keyboard interrupt Control-c.

When the network or server has problems, programs that access hard-mounted remote files fail differently than those that access soft-mounted remote files. Hard-mounted remote file systems cause the client's kernel to retry the requests until the server responds again. Soft-mounted remote file systems cause the client's system calls to return an error after trying for awhile. Because these errors can result in unexpected application errors and data corruption, avoid soft-mounting.

When a file system is hard mounted, a program that tries to access it hangs if the server fails to respond. In this case, the NFS system displays the following message on the console:


NFS server hostname not responding still trying

When the server finally responds, the following message appears on the console:


NFS server hostname ok

A program accessing a soft-mounted file system whose server is not responding generates the following message:


NFS operation failed for server hostname: error # (error_message)

Note -

Because of possible errors, do not soft-mount file systems with read-write data or file systems from which executables are run. Writable data could be corrupted if the application ignores the errors. Mounted executables might not load properly and can fail.


NFS Troubleshooting Procedures

To determine where the NFS service has failed, you need to follow several procedures to isolate the failure. Check for the following items:

In the process of checking these items, it might become apparent that other portions of the network are not functioning, such as the name service or the physical network hardware. The Solaris Naming Administration Guide contains debugging procedures for the NIS+ name service. Also, during the process it might become obvious that the problem isn't at the client end (for instance, if you get at least one trouble call from every subnet in your work area). In this case, it is much more timely to assume that the problem is the server or the network hardware near the server, and start the debugging process at the server, not at the client.

How to Check Connectivity on an NFS Client

  1. Check that the NFS server is reachable from the client. On the client, type the following command.


    % /usr/sbin/ping bee
    bee is alive

    If the command reports that the server is alive, remotely check the NFS server (see "How to Check the NFS Server Remotely").

  2. If the server is not reachable from the client, make sure that the local name service is running. For NIS+ clients type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  3. If the name service is running, make sure that the client has received the correct host information by typing the following:


    % /usr/bin/getent hosts bee
    129.144.83.117	bee.eng.acme.com
  4. If the host information is correct, but the server is not reachable from the client, run the ping command from another client.

    If the command run from a second client fails, see "How to Verify the NFS Service on the Server".

  5. If the server is reachable from the second client, use ping to check connectivity of the first client to other systems on the local net.

    If this fails, check the networking software configuration on the client (/etc/netmasks, /etc/nsswitch.conf, and so forth).

  6. If the software is correct, check the networking hardware.

    Try moving the client onto a second net drop.

How to Check the NFS Server Remotely

  1. Check that the NFS services have started on the NFS server by typing the following command:


    % rpcinfo -s bee|egrep 'nfs|mountd'
     100003  3,2    tcp,udp                          nfs     superuser
     100005  3,2,1  ticots,ticotsord,tcp,ticlts,udp  mountd  superuser

    If the daemons have not been started, see "How to Restart NFS Services".

  2. Check that the server's nfsd processes are responding. On the client, type the following command.


    % /usr/bin/rpcinfo -u bee nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers. Using the -t option tests the TCP connection. If this fails, skip to "How to Verify the NFS Service on the Server".

  3. Check that the server's mountd is responding, by typing the following command.


    % /usr/bin/rpcinfo -u bee mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting

    Using the -t option tests the TCP connection. If either attempt fails, skip to "How to Verify the NFS Service on the Server".

  4. Check the local autofs service if it is being used:


    % cd /net/wasp
    

    Choose a /net or /home mount point that you know should work properly. If this doesn't work, then as root on the client, type the following to restart the autofs service:


    # /etc/init.d/autofs stop
    # /etc/init.d/autofs start
    
  5. Verify that file system is shared as expected on the server.


    % /usr/sbin/showmount -e bee
    /usr/src                 eng
    /export/share/man        (everyone)

    Check the entry on the server and the local mount entry for errors. Also check the name space. In this instance, if the first client is not in the eng netgroup, that client would not be able to mount the /usr/src file system.

    Check all entries that include mounting information in all of the local files. The list includes /etc/vfstab and all the /etc/auto_* files.

How to Verify the NFS Service on the Server

  1. Become superuser.

  2. Check that the server can reach the clients.


    # ping lilac
    lilac is alive
  3. If the client is not reachable from the server, make sure that the local name service is running. For NIS+ clients type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  4. If the name service is running, check the networking software configuration on the server (/etc/netmasks, /etc/nsswitch.conf, and so forth).

  5. Type the following command to check whether the nfsd daemon is running.


    # rpcinfo -u localhost nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    # ps -ef | grep nfsd
    root    232      1  0  Apr 07     ?     0:01 /usr/lib/nfs/nfsd -a 16
    root   3127   2462  1  09:32:57  pts/3  0:00 grep nfsd

    Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service (see "How to Restart NFS Services").

  6. Type the following command to check whether the mountd daemon is running.


    # /usr/bin/rpcinfo -u localhost mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting
    # ps -ef | grep mountd
    root    145      1 0 Apr 07  ?     21:57 /usr/lib/autofs/automountd
    root    234      1 0 Apr 07  ?     0:04  /usr/lib/nfs/mountd
    root   3084 2462 1 09:30:20 pts/3  0:00  grep mountd

    Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service (see "How to Restart NFS Services").

  7. Type the following command to check whether the rpcbind daemon is running.


    # /usr/bin/rpcinfo -u localhost rpcbind
    program 100000 version 1 ready and waiting
    program 100000 version 2 ready and waiting
    program 100000 version 3 ready and waiting

    If rpcbind seems to be hung, either reboot the server or follow the steps in "How to Warm-Start rpcbind".

How to Restart NFS Services

  1. Become superuser.

  2. To enable daemons without rebooting, type the following commands.


# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start

This stops the daemons and restarts them, if there is an entry in /etc/dfs/dfstab.

How to Warm-Start rpcbind

If the NFS server cannot be rebooted because of work in progress, it is possible to restart rpcbind without having to restart all of the services that use RPC by completing a warm start as described in this procedure.

  1. Become superuser.

  2. Determine the PID for rpcbind.

    Run ps to get the PID (which is the value in the second column).


    # ps -ef |grep rpcbind
        root   115     1  0   May 31 ?        0:14 /usr/sbin/rpcbind
        root 13000  6944  0 11:11:15 pts/3    0:00 grep rpcbind
  3. Send a SIGTERM signal to the rpcbind process.

    In this example, term is the signal that is to be sent and 115 is the PID for the program (see the kill(1) man page). This causes rpcbind to create a list of the current registered services in /tmp/portmap.file and /tmp/rpcbind.file.


    # kill -s term 115
    

    Note -

    If you do not kill the rpcbind process with the -s term option, you cannot complete a warm start of rpcbind and must reboot the server to restore service.


  4. Restart rpcbind.

    Do a warm restart of the command so that the files created by the kill command are consulted, and the process resumes without requiring that all of the RPC services be restarted (see the rpcbind(1M) man page).


    # /usr/sbin/rpcbind -w
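The PID extraction in step 2 can be done with awk instead of reading the ps output by eye. The sketch below runs against a captured sample line rather than a live ps, so the PID shown is the one from the example above.

```shell
# The PID is the second whitespace-separated column of the ps line.
sample='root   115     1  0   May 31 ?        0:14 /usr/sbin/rpcbind'
pid=$(printf '%s\n' "$sample" | awk '{ print $2 }')
echo "$pid"    # 115
```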
    

Identifying Which Host Is Providing NFS File Service

Run the nfsstat command with the -m option to gather current NFS information. The name of the current server is printed after "currserver=".


% nfsstat -m
/usr/local from bee,wasp:/export/share/local
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,
		acl,rsize=32768,wsize=32768,retrans=5
 Failover: noresponse=0, failover=0, remap=0, currserver=bee

How to Verify Options Used With the mount Command

In the Solaris 2.6 release and in any versions of the mount command that were patched after the 2.6 release, no warning is issued for invalid options. The following procedure helps determine whether the options that were supplied either on the command line or through /etc/vfstab were valid.

For this example, assume that the following command has been run:


# mount -F nfs -o ro,vers=2 bee:/export/share/local /mnt
  1. Verify the options, by running the following command.


    % nfsstat -m
    /mnt from bee:/export/share/local
    Flags:  vers=2,proto=tcp,sec=sys,hard,intr,dynamic,acl,rsize=8192,wsize=8192,
            retrans=5

    The file system from bee has been mounted with the protocol version set to 2. Unfortunately, the nfsstat command does not display information about all of the options, but using the nfsstat command is the most accurate way to verify the options.

  2. Check the entry in /etc/mnttab.

    The mount command does not allow invalid options to be added to the mount table, so verifying that the options listed in the file match those listed on the command line is a way to check those options not reported by the nfsstat command.


    # grep bee /etc/mnttab
    bee:/export/share/local /mnt nfs	ro,vers=2,dev=2b0005e 859934818
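The option list in step 2 is the fourth field of the mnttab entry. The following sketch pulls it out of a captured sample line and checks for a specific option; on a real system you would feed it the grep output shown above.

```shell
# An mnttab entry's option list is its fourth field.
line='bee:/export/share/local /mnt nfs ro,vers=2,dev=2b0005e 859934818'
opts=$(printf '%s\n' "$line" | awk '{ print $4 }')
echo "$opts"                 # ro,vers=2,dev=2b0005e

# Surrounding commas make the match exact (vers=2, not vers=22):
case ",$opts," in
    *,vers=2,*) echo "version 2 confirmed" ;;
esac
```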

Troubleshooting Autofs

Occasionally, you might encounter problems with autofs. This section should make the problem-solving process easier. It presents a list of the error messages that autofs generates, divided into two parts:

  - Error messages generated by automount -v
  - Miscellaneous error messages

Each error message is followed by a description and probable cause of the message.

When troubleshooting, start the autofs programs with the verbose (-v) option; otherwise, you might experience problems without knowing why.

The following paragraphs are labeled with the error message you are likely to see if autofs fails, and a description of the possible problem.

Error Messages Generated by automount -v


bad key key in direct map mapname

While scanning a direct map, autofs has found an entry key without a prefixed /. Keys in direct maps must be full path names.


bad key key in indirect map mapname

While scanning an indirect map, autofs has found an entry key containing a /. Indirect map keys must be simple names--not path names.


can't mount server:pathname: reason

The mount daemon on the server refuses to provide a file handle for server:pathname. Check the export table on server.


couldn't create mount point mountpoint: reason

Autofs was unable to create a mount point required for a mount. This problem most frequently occurs when you attempt to hierarchically mount all of a server's exported file systems. A required mount point can exist only in a file system that cannot be mounted (because it is not exported), and the mount point cannot be created because the exported parent file system is exported read-only.


leading space in map entry entry text in mapname

Autofs has discovered an entry in an automount map that contains leading spaces. This is usually an indication of an improperly continued map entry, for example:


fake
/blat   		frobz:/usr/frotz 

In this example, the warning is generated when autofs encounters the second line because the first line should be terminated with a backslash (\).
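Assuming the two lines above were meant to be a single hierarchical entry (the names fake, /blat, and frobz:/usr/frotz come from the example itself), the corrected form terminates the first line with a backslash:

```
fake	\
/blat   		frobz:/usr/frotz
```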


mapname: Not found

The required map cannot be located. This message is produced only when the -v option is used. Check the spelling and path name of the map name.


remount server:pathname on mountpoint: server not responding

Autofs has failed to remount a file system it previously unmounted.


WARNING: mountpoint already mounted on

Autofs is attempting to mount over an existing mount point. This means an internal error occurred in autofs (an anomaly).

Miscellaneous Error Messages


dir mountpoint must start with '/'

Automounter mount point must be given as full path name. Check the spelling and path name of the mount point.


hierarchical mountpoints: pathname1 and pathname2

Autofs does not allow its mount points to have a hierarchical relationship. An autofs mount point must not be contained within another automounted file system.


host server not responding

Autofs attempted to contact server, but received no response.


hostname: exports: rpc_err

Error getting export list from hostname. This indicates a server or network problem.


map mapname, key key: bad

The map entry is malformed, and autofs cannot interpret it. Recheck the entry; perhaps the entry has characters that need escaping.


mapname: nis_err

Error in looking up an entry in a NIS map. This can indicate NIS problems.


mount of server:pathname on mountpoint: reason

Autofs failed to do a mount. This can indicate a server or network problem.


mountpoint: Not a directory

Autofs cannot mount itself on mountpoint because it is not a directory. Check the spelling and path name of the mount point.


nfscast: cannot send packet: reason

Autofs cannot send a query packet to a server in a list of replicated file system locations.


nfscast: cannot receive reply: reason

Autofs cannot receive replies from any of the servers in a list of replicated file system locations.


nfscast: select: reason

All these error messages indicate problems attempting to ping servers for a replicated file system. This can indicate a network problem.


pathconf: no info for server:pathname

Autofs failed to get pathconf information for path name (see the fpathconf(2) man page).


pathconf: server: server not responding

Autofs is unable to contact the mount daemon on server that provides the information to pathconf().

Other Errors With Autofs

If the /etc/auto* files have the execute bit set, the automounter tries to execute the maps, which creates messages like:

/etc/auto_home: +auto_home: not found

In this case, the auto_home file has incorrect permissions. Each entry in the file generates an error message much like this one. Reset the file's permissions by typing the following command:


# chmod 644 /etc/auto_home
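The same fix can be applied across all of the automounter maps at once. The sketch below operates on a scratch directory instead of /etc, so it is safe to run as-is; point dir at /etc to use it for real.

```shell
#!/bin/sh
# Clear the execute bit on any auto_* map that has it set.
# A scratch directory stands in for /etc in this sketch.
dir=$(mktemp -d)
touch "$dir/auto_home"
chmod 755 "$dir/auto_home"      # simulate the incorrect permissions

for map in "$dir"/auto_*; do
    if [ -f "$map" ] && [ -x "$map" ]; then
        chmod 644 "$map"        # the automounter reads maps, never executes them
        echo "reset permissions on $map"
    fi
done
```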

NFS Error Messages

This section shows an error message followed by a description of the conditions that should create the error and at least one way of fixing the problem.



Bad argument specified with index option - must be a file

You must include a file name with the -index option. You cannot use directory names.


Cannot establish NFS service over /dev/tcp: transport setup problem

This message is often created when the services information in the name space has not been updated. It can also be reported for UDP. To fix this problem, you must update the services data in the name space. For NIS+ the entries should be:


nfsd nfsd tcp 2049 NFS server daemon
nfsd nfsd udp 2049 NFS server daemon

For NIS and /etc/services, the entries should be:


nfsd    2049/tcp    nfs    # NFS server daemon
nfsd    2049/udp    nfs    # NFS server daemon
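A quick grep confirms whether the entries are in place. This sketch checks a sample services file embedded in the script; on a live system you would grep /etc/services (or query the NIS or NIS+ services map) instead.

```shell
#!/bin/sh
# Sample /etc/services entries for the NFS server daemon.
services='nfsd    2049/tcp    nfs    # NFS server daemon
nfsd    2049/udp    nfs    # NFS server daemon'

# Verify that nfsd is registered for both transports.
for proto in tcp udp; do
    if printf '%s\n' "$services" | grep "2049/$proto" >/dev/null; then
        echo "nfsd over $proto: ok"
    else
        echo "nfsd over $proto: missing"
    fi
done
```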

Cannot use index option without public option

Include the public option with the index option in the share command. You must define the public file handle for the -index option to work.


Note -

The Solaris 2.5.1 release required that the public file handle be set using the share command. A change in the Solaris 2.6 release sets the public file handle to be / by default. This error message is no longer relevant.



Could not use public filehandle in request to server

This message is displayed if the public option is specified but the NFS server does not support the public file handle. In this case, the mount will fail. To remedy this situation, either try the mount request without using the public file handle or reconfigure the NFS server to support the public file handle.


NOTICE: NFS3: failing over from host1 to host2

This message is displayed on the console when a failover occurs. It is an advisory message only.


filename: File too large

An NFS version 2 client is trying to access a file that is over 2 Gbytes.


mount: ... server not responding:RPC_PMAP_FAILURE - RPC_TIMED_OUT

The server sharing the file system you are trying to mount is down or unreachable, at the wrong run level, or its rpcbind is dead or hung.


mount: ... server not responding: RPC_PROG_NOT_REGISTERED

The mount request reached rpcbind, but the NFS mount daemon mountd is not registered.


mount: ... No such file or directory

Either the remote directory or the local directory does not exist. Check the spelling of the directory names. Run ls on both directories.


mount: ...: Permission denied

Your computer name might not be in the list of clients or netgroup allowed access to the file system you want to mount. Use showmount -e to verify the access list.
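Checking the access list can be scripted as well. The sketch below parses a saved sample of showmount -e output (the host names bee and wasp are illustrative); on a live system, pipe showmount -e servername through the same grep.

```shell
#!/bin/sh
# A saved sample of `showmount -e` output; host names are invented.
exports='/export/share/local   bee,wasp
/export/home           (everyone)'

client=wasp
if printf '%s\n' "$exports" | grep -w "$client" >/dev/null; then
    echo "$client appears in an access list"
else
    echo "$client not listed; check the share options on the server"
fi
```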


nfs mount: ignoring invalid option "-option"

The -option flag is not valid. Refer to the mount_nfs(1M) man page to verify the required syntax.


Note -

This error message is not displayed when running any version of the mount command included in a Solaris release from 2.6 to the current release or in earlier versions that have been patched.



nfs mount: NFS can't support "nolargefiles"

An NFS client has attempted to mount a file system from an NFS server using the -nolargefiles option. This option is not supported for NFS file system types.


nfs mount: NFS V2 can't support "largefiles"

The NFS version 2 protocol cannot handle large files. You must use version 3 if access to large files is required.


NFS server hostname not responding still trying

If programs hang while doing file-related work, your NFS server might be dead. This message indicates that NFS server hostname is down or that a problem has occurred with the server or the network. If failover is being used, hostname is a list of servers. Start with "How to Check Connectivity on an NFS Client".


NFS fsstat failed for server hostname: RPC: Authentication error

This error can be caused by many situations. One of the most difficult to debug occurs when a user is in too many groups. Currently, a user can be in no more than 16 groups when accessing files through NFS mounts. If a user must be in more than 16 groups, and if at least the Solaris 2.5 release is running on the NFS server and the NFS clients, use access control lists (ACLs) to grant the needed access privileges.
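A quick way to spot this condition is to count the user's group memberships, since AUTH_SYS credentials carry at most 16 group IDs. This sketch checks the current user by default; pass a user name as an argument to check someone else.

```shell
#!/bin/sh
# Count group memberships; AUTH_SYS passes at most 16 group IDs,
# so membership beyond that can cause NFS authentication errors.
user=${1:-$(id -un)}
ngroups=$(id -G "$user" | wc -w)
echo "$user is in $ngroups group(s)"
if [ "$ngroups" -gt 16 ]; then
    echo "over the 16-group limit; consider ACLs for the extra access"
fi
```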


port number in nfs URL not the same as port number in port option

The port number included in the NFS URL must match the port number included with the -port option to mount. If the port numbers do not match, the mount will fail. Either change the command to make the port numbers the same or do not specify the port number that is incorrect. Usually, you do not need to specify the port number both in the NFS URL and with the -port option.
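The mismatch is easy to check before mounting. This sketch extracts the port from an NFS URL with sed and compares it with the intended -port value (the server name bee and port 3000 are invented for illustration).

```shell
#!/bin/sh
# Compare the port embedded in an NFS URL with the -port option value.
url='nfs://bee:3000/export/share/local'
port_opt=3000

# Pull the digits between the host name and the path.
url_port=$(printf '%s\n' "$url" | sed -n 's|^nfs://[^:/]*:\([0-9]*\)/.*|\1|p')
if [ "$url_port" = "$port_opt" ]; then
    echo "ports match: $url_port"
else
    echo "port mismatch: URL says $url_port, -port says $port_opt"
fi
```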


replicas must have the same version

For NFS failover to function properly, the NFS servers that are replicas must support the same version of the NFS protocol. Mixing version 2 and version 3 servers is not allowed.


replicated mounts must be read-only

NFS failover does not work on file systems that are mounted read-write. Mounting the file system read-write increases the likelihood that a file will change. NFS failover depends on the file systems being identical.


replicated mounts must not be soft

Replicated mounts require that you wait for a timeout before failover occurs. The soft option requires that the mount fail immediately when a timeout starts, so you cannot include the -soft option with a replicated mount.


share_nfs: Cannot share more than one filesystem with 'public' option

Check that the /etc/dfs/dfstab file has only one file system selected to be shared with the -public option. Only one public file handle can be established per server, so only one file system per server can be shared with this option.


WARNING: No network locking on hostname:path: contact admin to install server change

An NFS client has unsuccessfully attempted to establish a connection with the network lock manager on an NFS server. Rather than fail the mount, this warning is generated to warn you that locking will not work.