System Administration Guide: Resource Management and Network Services

Chapter 15 Remote File-System Administration (Tasks)

This chapter provides information on how to perform such NFS administration tasks as setting up NFS services, adding new file systems to share, and mounting file systems. The chapter also covers the use of the Secure NFS system and the use of WebNFS functionality. The last part of the chapter includes troubleshooting procedures and a list of some of the NFS error messages and their meanings.

Your responsibilities as an NFS administrator depend on your site's requirements and the role of your computer on the network. You might be responsible for all the computers on your local network, in which case you might be responsible for determining several configuration items.

Maintaining a server after it has been set up involves several ongoing tasks.

Remember, a computer can be both a server and a client—sharing local file systems with remote computers and mounting remote file systems.

Automatic File-System Sharing

Servers provide access to their file systems by sharing them over the NFS environment. You specify which file systems are to be shared with the share command or the /etc/dfs/dfstab file.

Entries in the /etc/dfs/dfstab file are shared automatically whenever you start NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis. For example, if your computer is a server that supports home directories, you need to make the home directories available at all times. Most file-system sharing should be done automatically. The only time that manual sharing should occur is during testing or troubleshooting.

The dfstab file lists all the file systems that your server shares with its clients. This file also controls which clients can mount a file system. You can modify dfstab to add or delete a file system or change the way sharing is done. Just edit the file with any text editor that is supported (such as vi). The next time the computer enters run level 3, the system reads the updated dfstab to determine which file systems should be shared automatically.

Each line in the dfstab file consists of a share command—the same command you type at the command-line prompt to share the file system. The share command is located in /usr/sbin.
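A dfstab file for a small server might look like the following sketch. The paths, the description strings, and the eng access list are illustrations, not required values:

```shell
# /etc/dfs/dfstab: share commands that run automatically at run level 3.
# Hypothetical entries; adjust paths and options to your site.
share -F nfs -o ro -d "manual pages" /export/share/man
share -F nfs -o rw=eng -d "source tree" /usr/src
share -F nfs -o ro,public -d "ftp archive" /export/ftp
```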

Table 15–1 File-System Sharing Task Map

Task 

Description 

For Instructions 

Establish automatic file-system sharing

 Steps to configure a server so that file systems are automatically shared when the server is rebooted. See How to Set Up Automatic File-System Sharing.

Enable WebNFS

 Steps to configure a server so that users can access files by using WebNFS. See How to Enable WebNFS Access.

Enable NFS server logging

 Steps to configure a server so that NFS logging is run on selected file systems. See How to Enable NFS Server Logging.

How to Set Up Automatic File-System Sharing

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Add entries for each file system to be shared.

    Edit /etc/dfs/dfstab and add one entry to the file for each file system that you want to be automatically shared. Each entry must be on a line by itself in the file and use this syntax:


    share [-F nfs] [-o specific-options] [-d description] pathname

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  3. Check if the NFS service is running on the server.

    If this is the first share command or set of share commands that you have initiated, the NFS service might not be running. Check that one of the NFS daemons is running by using the following command.


    # pgrep nfsd
    318

    318 is the process ID for nfsd in this example. If an ID is not displayed, the service is not running. The second daemon to check for is mountd.

  4. (Optional) Start the NFS service.

    If the previous step does not report a process ID for nfsd, start the NFS service by using the following command.


    # /etc/init.d/nfs.server start
    

    This command ensures that the NFS service is now running on the server and that the service restarts automatically when the server is at run level 3 during boot.

  5. (Optional) Share the file system.

    After the entry is in /etc/dfs/dfstab, the file system can be shared by either rebooting the system or by using the shareall command. If the NFS service was started earlier, this command does not need to be run because the init script runs the command.


    # shareall
    
  6. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,public  ""
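The daemon check in step 3 can be wrapped in a small helper that reports on both nfsd and mountd. This is a sketch, assuming a system where pgrep is available:

```shell
# Report whether a daemon appears in the process table (sketch).
check_daemon() {
    if pgrep "$1" >/dev/null 2>&1; then
        echo "$1 is running"
    else
        echo "$1 is not running"
    fi
}

check_daemon nfsd
check_daemon mountd
```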

Where to Go From Here

The next step is to set up your autofs maps so that clients can access the file systems you have shared on the server. See Autofs Administration Task Overview.

How to Enable WebNFS Access

Starting with the 2.6 release, by default all file systems that are available for NFS mounting are automatically available for WebNFS access. You need to use this procedure only under certain conditions.

See Planning for WebNFS Access for a list of issues that you should consider before starting the WebNFS service.

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Add entries for each file system to be shared by using the WebNFS service.

    Edit /etc/dfs/dfstab and add one entry to the file for each file system. The public and index tags that are shown in the following example are optional.


    share -F nfs -o ro,public,index=index.html /export/ftp

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  3. Check if the NFS service is running on the server.

    If this is the first share command or set of share commands that you have initiated, the NFS daemons might not be running. Check to see that one of the NFS daemons is running by using the following command.


    # pgrep nfsd
    318

    318 is the process ID for nfsd in this example. If an ID is not displayed, the service is not running. The second daemon to check for is mountd.

  4. (Optional) Start the NFS service.

    If the previous step does not report a process ID for nfsd, start the NFS service by using the following command.


    # /etc/init.d/nfs.server start
    

    This command ensures that the NFS service is now running on the server and that the service restarts automatically when the server is at run level 3 during boot.

  5. (Optional) Share the file system.

    After the entry is in /etc/dfs/dfstab, the file system can be shared by either rebooting the system or by using the shareall command. If the NFS service was started earlier, this command does not need to be run because the script runs the command.


    # shareall
    
  6. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,public,index=index.html  ""

How to Enable NFS Server Logging

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Optional: Change file system configuration settings.

    In /etc/nfs/nfslog.conf, you can either edit the default settings for all file systems by changing the data associated with the global tag or you can add a new tag for this file system. If these changes are not needed, you do not need to change this file. The format of /etc/nfs/nfslog.conf is described in nfslog.conf(4).

  3. Add entries for each file system to be shared by using NFS server logging.

    Edit /etc/dfs/dfstab and add one entry to the file for the file system on which you are enabling NFS server logging. The tag that is used with the log=tag option must be entered in /etc/nfs/nfslog.conf. This example uses the default settings in the global tag.


    share -F nfs -o ro,log=global /export/ftp

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  4. Check if the NFS service is running on the server.

    If this is the first share command or set of share commands that you have initiated, the NFS daemons might not be running. Check that one of the NFS daemons is running by using the following command.


    # pgrep nfsd
    318

    318 is the process ID for nfsd in this example. If an ID is not displayed, the service is not running. The second daemon to check for is mountd.

  5. (Optional) Start the NFS service.

    If the previous step does not report a process ID for nfsd, start the NFS service by using the following command.


    # /etc/init.d/nfs.server start
    

    This command ensures that the NFS service is now running on the server and that the service restarts automatically when the server is at run level 3 during boot.

  6. (Optional) Share the file system.

    After the entry is in /etc/dfs/dfstab, the file system can be shared by either rebooting the system or by using the shareall command. If the NFS service was started earlier, this command does not need to be run because the script runs the command.


    # shareall
    
  7. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,log=global  ""
  8. (Optional) Start the NFS log daemon, nfslogd, if it is not running already.

    Restarting the NFS daemons by using the nfs.server script starts the nfslogd daemon if the nfslog.conf file exists. Otherwise, run the command once by hand to create the files, so that the daemon starts automatically when the server is rebooted.


    # /usr/lib/nfs/nfslogd
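The tag configuration from step 2 lives in /etc/nfs/nfslog.conf. The following is a hedged sketch of what a custom tag might look like; the publicftp tag name and all of the paths are assumptions for illustration, so check nfslog.conf(4) for the authoritative format:

```shell
# /etc/nfs/nfslog.conf (sketch): a custom tag with its own log location.
# Referenced from dfstab as: share -F nfs -o ro,log=publicftp /export/ftp
publicftp  defaultdir=/var/nfs/ftplogs \
           log=nfslog fhtable=fhtable buffer=nfslog_workbuffer
```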
    

Mounting File Systems

You can mount file systems in several ways. They can be mounted automatically when the system is booted, on demand from the command line, or through the automounter. The automounter provides many advantages over mounting at boot time or mounting from the command line, but many situations require a combination of all three methods. In addition to these three ways of mounting a file system, the options that you use when mounting can enable or disable specific behavior. See the following table for a complete list of the tasks that are associated with file-system mounting.

Table 15–2 Mounting File Systems Task Map

Task 

Description 

For Instructions 

Mount a file system at boot time

 Steps so that a file system is mounted whenever a system is rebooted. See How to Mount a File System at Boot Time.

Mount a file system by using a command

 Steps to mount a file system when a system is running. This procedure is useful when testing. See How to Mount a File System From the Command Line.

Mount with the automounter

 Steps to access a file system on demand without using the command line. See Mounting With the Automounter.

Prevent large files

 Steps to prevent large files from being created on a file system. See How to Disable Large Files on an NFS Server.

Start client-side failover

 Steps to enable the automatic switchover to a working file system if a server fails. See How to Use Client-Side Failover.

Disable mount access for a client

 Steps to disable the ability of one client to access a remote file system. See How to Disable Mount Access for One Client.

Provide access to a file system through a firewall

 Steps to allow access to a file system through a firewall by using the WebNFS protocol. See How to Mount an NFS File System Through a Firewall.

Mount a file system by using an NFS URL

 Steps to allow access to a file system by using an NFS URL. This process allows for file-system access without using the MOUNT protocol. See How to Mount an NFS File System Using an NFS URL.

How to Mount a File System at Boot Time

If you want to mount file systems at boot time instead of using autofs maps, follow this procedure. Although you must follow this procedure for all local file systems, the procedure is not recommended for remote file systems because it must be repeated on every client.

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Add an entry for the file system to /etc/vfstab.

Entries in the /etc/vfstab file have the following syntax:

special  fsckdev  mountp  fstype  fsckpass  mount-at-boot  mntopts

See the vfstab(4) man page for more information.


Caution –

NFS servers should not have NFS vfstab entries because of a potential deadlock. The NFS service is started after the entries in /etc/vfstab are checked. As a result, if two servers that are mounting file systems from each other fail at the same time, each system could hang as the systems reboot.


Example – vfstab Entry

You want a client computer to mount the /var/mail directory from the server wasp. You want the file system to be mounted as /var/mail on the client and you want the client to have read-write access. Add the following entry to the client's vfstab file.


wasp:/var/mail - /var/mail nfs - yes rw
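The entry above follows the seven-field vfstab syntax shown earlier in this procedure. A quick sketch that labels the fields of such a line:

```shell
# Split a vfstab-style entry into its seven fields (sketch).
line='wasp:/var/mail - /var/mail nfs - yes rw'
set -- $line
echo "special=$1 fsckdev=$2 mountp=$3 fstype=$4 fsckpass=$5 boot=$6 mntopts=$7"
# prints: special=wasp:/var/mail fsckdev=- mountp=/var/mail fstype=nfs fsckpass=- boot=yes mntopts=rw
```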

How to Mount a File System From the Command Line

Mounting a file system from the command line is often done to test a new mount point. This type of mount allows for temporary access to a file system that is not available through the automounter.

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Mount the file system.

    Type the following command:


    # mount -F nfs -o ro bee:/export/share/local /mnt
    

    In this instance, the /export/share/local file system from the server bee is mounted read-only on /mnt on the local system. Mounting from the command line allows for temporary viewing of the file system. You can unmount the file system with umount or by rebooting the local host.


    Caution –

    Starting with the 2.6 release, the mount command does not warn about invalid options. The command silently ignores any options that cannot be interpreted. To prevent unexpected behavior, verify all of the options that you use.


Mounting With the Automounter

Autofs Administration Task Overview includes the specific instructions for establishing and supporting mounts with the automounter. Without any changes to the generic system, clients should be able to access remote file systems through the /net mount point. To mount the /export/share/local file system from the previous example, all you need to do is type the following:


% cd /net/bee/export/share/local

Because the automounter allows all users to mount file systems, root access is not required. The automounter also provides for automatic unmounting of file systems, so you do not need to unmount file systems after you are finished.

How to Disable Large Files on an NFS Server

For servers that support clients that cannot handle files over 2 Gbytes, you might need to disable the ability to create large files.


Note –

Previous versions of the Solaris operating environment cannot use large files. Check that clients of the NFS server are running at minimum the 2.6 release if the clients need to access large files.


  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Check that no large files exist on the file system.

    Here is an example of a command that you can run to locate large files:


    # cd /export/home1
    # find . -xdev -size +2000000 -exec ls -l {} \;
    

    If large files are on the file system, you must remove or move them to another file system.

  3. Unmount the file system.


    # umount /export/home1
    
  4. Reset the file system state if the file system has been mounted by using largefiles.

    fsck resets the file system state if no large files exist on the file system:


    # fsck /export/home1
    
  5. Mount the file system by using nolargefiles.


    # mount -F ufs -o nolargefiles /export/home1
    

    You can mount from the command line, but to make the option more permanent, add an entry that resembles the following into /etc/vfstab:


    /dev/dsk/c0t3d0s1 /dev/rdsk/c0t3d0s1 /export/home1  ufs  2  yes  nolargefiles
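The find command in step 2 locates the large files; if any exist, a sketch like the following could relocate them before the file system is remounted. The function name and the destination path are assumptions, and you should verify that the destination file system has space first:

```shell
# Move files larger than a threshold (in 512-byte blocks) to another
# file system. Hypothetical helper for illustration only.
move_large_files() {
    src=$1 dest=$2 blocks=$3
    find "$src" -xdev -type f -size +"$blocks" | while read -r f; do
        echo "moving: $f"
        mv "$f" "$dest/"
    done
}

# Example: move anything over roughly 1 Gbyte out of /export/home1.
# move_large_files /export/home1 /export/spare 2000000
```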

How to Use Client-Side Failover

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. On the NFS client, mount the file system by using the ro option.

    You can mount from the command line, through the automounter, or by adding an entry to /etc/vfstab that resembles the following:


    bee,wasp:/export/share/local  -  /usr/local  nfs  -  no  -o ro

    This syntax was allowed by the automounter in earlier releases, but failover was available only while a server was being selected, not after the file system was mounted.


    Note –

    Servers that are running different versions of the NFS protocol cannot be mixed by using a command line or in a vfstab entry. Mixing servers that support NFS V2 and V3 protocols can only be done with autofs. In autofs, the best subset of version 2 or version 3 servers is used.


How to Disable Mount Access for One Client

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Add an entry in /etc/dfs/dfstab.

    The first example allows mount access to all clients in the eng netgroup except the host that is named rose. The second example allows mount access to all clients in the eng.example.com DNS domain except for rose.


    share -F nfs -o ro=-rose:eng /export/share/man
    share -F nfs -o ro=-rose:.eng.example.com /export/share/man

    For additional information on access lists, see Setting Access Lists With the share Command. For a description of /etc/dfs/dfstab, see dfstab(4).

  3. Share the file system.

    The NFS server does not use changes to /etc/dfs/dfstab until the file systems are shared again or until the server is rebooted.


    # shareall

How to Mount an NFS File System Through a Firewall

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Manually mount the file system, by using a command such as the following:


    # mount -F nfs -o public bee:/export/share/local /mnt
    

    In this example, the file system /export/share/local is mounted on the local client by using the public file handle. An NFS URL can be used instead of the standard path name. If the public file handle is not supported by the server bee, the mount operation fails.


    Note –

    This procedure requires that the file system on the NFS server be shared by using the public option. Additionally, any firewalls between the client and the server must allow TCP connections on port 2049. Starting with the 2.6 release, all file systems that are shared allow for public file handle access, so the public option is applied by default.


How to Mount an NFS File System Using an NFS URL

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Manually mount the file system, by using a command such as the following:


    # mount -F nfs nfs://bee:3000/export/share/local /mnt
    

    In this example, the /export/share/local file system is being mounted from the server bee by using NFS port number 3000. The port number is not required; by default, the standard NFS port number of 2049 is used. You can choose to include the public option with an NFS URL. Without the public option, the MOUNT protocol is used if the public file handle is not supported by the server. The public option forces the use of the public file handle, and the mount fails if the public file handle is not supported.

Setting Up NFS Services

This section describes some of the tasks necessary to initialize or use NFS services.

Table 15–3 NFS Services Task Map

Task 

Description 

For Instructions 

Start the NFS server

 Steps to start the NFS service, if it has not been started automatically. See How to Start the NFS Services.

Stop the NFS server

 Steps to stop the NFS service. Normally the service should not need to be stopped. See How to Stop the NFS Services.

Start the automounter

 Steps to start the automounter. This procedure is required when some of the automounter maps are changed. See How to Start the Automounter.

Stop the automounter

 Steps to stop the automounter. This procedure is required when some of the automounter maps are changed. See How to Stop the Automounter.

How to Start the NFS Services

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Enable the NFS service daemons.

    Type the following command:


    # /etc/init.d/nfs.server start
    

    This command starts the daemons if an entry is in /etc/dfs/dfstab.

How to Stop the NFS Services

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Disable the NFS service daemons.

    Type the following command:


    # /etc/init.d/nfs.server stop
    

How to Start the Automounter

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Enable the autofs daemon.

    Type the following command:


    # /etc/init.d/autofs start
    

How to Stop the Automounter

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Disable the autofs daemon.

    Type the following command:


    # /etc/init.d/autofs stop
    

Administering the Secure NFS System

To use the Secure NFS system, all the computers you are responsible for must have a domain name. A domain is an administrative entity, typically consisting of several computers, that is part of a larger network. If you are running a name service, you should also establish the name service for the domain. See System Administration Guide: Naming and Directory Services (FNS and NIS+).

You can configure the Secure NFS environment to use Diffie-Hellman authentication. “Using Authentication Services (Tasks)” in System Administration Guide: Security Services discusses this authentication service.

Kerberos V5 authentication is also supported by the NFS service. “Introduction to SEAM” in System Administration Guide: Security Services discusses the Kerberos service.

How to Set Up a Secure NFS Environment With DH Authentication

  1. Assign your domain a domain name, and make the domain name known to each computer in the domain.

    See the System Administration Guide: Naming and Directory Services (FNS and NIS+) if you are using NIS+ as your name service.

  2. Establish public keys and secret keys for your clients' users by using the newkey or nisaddcred command. Have each user establish his or her own secure RPC password by using the chkey command.


    Note –

    For information about these commands, see the newkey(1M), the nisaddcred(1M), and the chkey(1) man pages.


    When public keys and secret keys have been generated, the public keys and encrypted secret keys are stored in the publickey database.

  3. Verify that the name service is responding. If you are running NIS+, type the following:


    # nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995

    If you are running NIS, verify that the ypbind daemon is running.

  4. Verify that the keyserv daemon (the key server) is running.

    Type the following command.


    # ps -ef | grep keyserv
    root    100     1  16    Apr 11 ?      0:00 /usr/sbin/keyserv
    root   2215  2211   5  09:57:28 pts/0  0:00 grep keyserv

    If the daemon isn't running, start the key server by typing the following:


    # /usr/sbin/keyserv
    
  5. Decrypt and store the secret key.

    Usually, the login password is identical to the network password. In this situation, keylogin is not required. If the passwords are different, the users have to log in, and then do a keylogin. You still need to use the keylogin -r command as root to store the decrypted secret key in /etc/.rootkey.


    Note –

    You need to run keylogin -r if the root secret key changes or /etc/.rootkey is lost.


  6. Update mount options for the file system.

    Edit the /etc/dfs/dfstab file and add the sec=dh option to the appropriate entries (for Diffie-Hellman authentication).


    share -F nfs -o sec=dh /export/home
    

    See the dfstab(4) man page for a description of /etc/dfs/dfstab.

  7. Update the automounter maps for the file system.

    Edit the auto_master data to include sec=dh as a mount option in the appropriate entries (for Diffie-Hellman authentication):


    /home	auto_home	-nosuid,sec=dh

    Note –

    Releases through Solaris 2.5 have a limitation. If a client does not mount as secure a file system that is shared as secure, users have access as user nobody, rather than as themselves. With version 2 of the protocol in later releases, the NFS server refuses access if the security modes do not match, unless -sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so clients do not need to specify sec=dh. The users have access to the files as themselves.


    When you reinstall, move, or upgrade a computer, remember to save /etc/.rootkey if you do not establish new keys or change them for root. If you do delete /etc/.rootkey, you can always type the following:


    # keylogin -r
    

WebNFS Administration Tasks

This section provides instructions for administering the WebNFS system. This is a list of some related tasks.

Table 15–4 WebNFS Administration Task Map

Task 

Description 

For Instructions 

Plan for WebNFS

 Issues to consider before enabling the WebNFS service. See Planning for WebNFS Access.

Enable WebNFS

 Steps to enable mounting of an NFS file system by using the WebNFS protocol. See How to Enable WebNFS Access.

Enable WebNFS through a firewall

 Steps to allow access to files through a firewall by using the WebNFS protocol. See How to Enable WebNFS Access Through a Firewall.

Browse by using an NFS URL

 Instructions for using an NFS URL within a web browser. See How to Browse Using an NFS URL.

Use a public file handle with autofs

 Steps to force use of the public file handle when mounting a file system with the automounter. See How to Use a Public File Handle With Autofs.

Use an NFS URL with autofs

 Steps to add an NFS URL to the automounter maps. See How to Use NFS URLs With Autofs.

Provide access to a file system through a firewall

 Steps to allow access to a file system through a firewall by using the WebNFS protocol. See How to Mount an NFS File System Through a Firewall.

Mount a file system by using an NFS URL

 Steps to allow access to a file system by using an NFS URL. This process allows for file-system access without using the MOUNT protocol. See How to Mount an NFS File System Using an NFS URL.

Planning for WebNFS Access

To use the WebNFS functionality, you first need an application capable of running and loading an NFS URL (for example, nfs://server/path). The next step is to choose the file system that will be exported for WebNFS access. If the application is web browsing, often the document root for the web server is used. You need to consider several factors when choosing a file system to export for WebNFS access.

  1. Each server has one public file handle that by default is associated with the server's root file system. The path in an NFS URL is evaluated relative to the directory with which the public file handle is associated. If the path leads to a file or directory within an exported file system, the server provides access. You can use the public option of the share command to associate the public file handle with a specific exported directory. Using this option allows URLs to be relative to the shared file system rather than to the server's root file system. The root file system does not allow web access unless it is shared.

  2. The WebNFS environment enables users who already have mount privileges to access files through a browser regardless of whether the file system is exported by using the public option. Because users already have access to these files through the NFS setup, this access should not create any additional security risk. You only need to share a file system by using the public option if users who cannot mount the file system need to use WebNFS access.

  3. File systems that are already open to the public make good candidates for using the public option. Some examples are the top directory in an ftp archive or the main URL directory for a web site.

  4. You can use the index option with the share command to force the loading of an HTML file instead of listing the directory when an NFS URL is accessed.

    After a file system is chosen, review the files and set access permissions to restrict viewing of files or directories, as needed. Establish the permissions, as appropriate, for any NFS file system that is being shared. For many sites, 755 permissions for directories and 644 permissions for files provide the correct level of access.

    You need to consider additional factors if both NFS and HTTP URLs are to be used to access one web site. These factors are described in WebNFS Limitations With Web Browser Use.
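The effect of moving the public file handle (item 1 above) can be illustrated with a sketch; the server name and paths here are assumptions:

```shell
# Without the public option, nfs://server/path is evaluated relative to
# the server's root file system. After the following share command:
share -F nfs -o ro,public /export/ftp
# nfs://server/docs resolves relative to /export/ftp, that is, to
# /export/ftp/docs rather than to /docs under the server's root.
```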

How to Browse Using an NFS URL

Browsers capable of supporting the WebNFS service should provide access to an NFS URL that resembles the following:


nfs://server<:port>/path

server

Name of the file server 

port

Port number to use (the default value is 2049)

path

Path to file, which can be relative to the public file handle or to the root file system 
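The three components can be picked apart with ordinary shell parameter expansion. The following is a sketch, assuming a well-formed URL with a path present after the host part:

```shell
# Split an NFS URL into server, port, and path (sketch; assumes the URL
# contains a path after the host part).
parse_nfs_url() {
    rest=${1#nfs://}          # drop the scheme
    hostpart=${rest%%/*}      # server[:port]
    path=/${rest#*/}          # everything after the first slash
    server=${hostpart%%:*}
    case $hostpart in
        *:*) port=${hostpart#*:} ;;
        *)   port=2049 ;;     # default NFS port
    esac
    echo "$server $port $path"
}

parse_nfs_url nfs://bee:3000/export/share/local   # bee 3000 /export/share/local
parse_nfs_url nfs://bee/export/ftp                # bee 2049 /export/ftp
```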


Note –

In most browsers, the URL service type (for example, nfs or http) is remembered from one transaction to the next. The exception occurs when a URL that includes a different service type is loaded. After you use an NFS URL, a reference to an HTTP URL might be loaded. If so, subsequent pages are loaded by using the HTTP protocol instead of the NFS protocol.


How to Enable WebNFS Access Through a Firewall

You can enable WebNFS access for clients that are not part of the local subnet by configuring the firewall to allow a TCP connection on port 2049. Just allowing access for httpd does not allow NFS URLs to be used.
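A quick way to confirm that the firewall passes the NFS port is to attempt a TCP connection to it. The following sketch uses bash's /dev/tcp device and simply reports "closed" if the shell lacks that feature or the connection fails; the server name in the usage comment is an assumption:

```shell
# Check whether a TCP connection to the NFS port succeeds (sketch).
nfs_port_open() {
    server=$1 port=${2:-2049}
    if (exec 3<>"/dev/tcp/$server/$port") 2>/dev/null; then
        echo open
    else
        echo closed
    fi
}

# nfs_port_open fileserver    # prints "open" if port 2049 is reachable
```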

Autofs Administration Task Overview

This section describes some of the most common tasks you might encounter in your own environment. Recommended procedures are included for each scenario to help you configure autofs to best meet your clients' needs.


Note –

Use the Solstice System Management Tools or see the System Administration Guide: Naming and Directory Services (FNS and NIS+) to perform the tasks that are discussed in this section.


Autofs Administration Task Map

The following table provides a description and a pointer to many of the tasks that are related to autofs.

Table 15–5 Autofs Administration Task Map

Task 

Description 

For Instructions 

Start autofs 

Start the automount service without having to reboot the system 

How to Start the Automounter 

Stop autofs 

Stop the automount service without disabling other network services 

How to Stop the Automounter 

Access file systems by using autofs 

Access file systems by using the automount service 

Mounting With the Automounter 

Modify the autofs maps 

Steps to modify the master map, which should be used to list other maps 

How to Modify the Master Map 

Steps to modify an indirect map, which should be used for most maps 

How to Modify Indirect Maps 

Steps to modify a direct map, which should be used when a direct association between a mount point on a client and a server is required 

How to Modify Direct Maps 

Modify the autofs maps to access non-NFS file systems 

Steps to set up an autofs map with an entry for a CD-ROM application 

How to Access CD-ROM Applications With Autofs 

Steps to set up an autofs map with an entry for a PC-DOS diskette 

How to Access PC-DOS Data Diskettes With Autofs 

Steps to use autofs to access a CacheFS file system 

How to Access NFS File Systems Using CacheFS 

Using /home

Example of how to set up a common /home map 

Setting Up a Common View of /home

Steps to set up a /home map that refers to multiple file systems 

How to Set Up /home With Multiple Home Directory File Systems 

Using a new autofs mount point 

Steps to set up a project-related autofs map 

How to Consolidate Project-Related Files Under /ws

Steps to set up an autofs map that supports different client architectures 

How to Set Up Different Architectures to Access a Shared Name Space 

Steps to set up an autofs map that supports different operating systems 

How to Support Incompatible Client Operating System Versions 

Replicate file systems with autofs 

Provide access to file systems that fail over 

How to Replicate Shared Files Across Several Servers 

Using security restrictions with autofs 

Provide access to file systems while restricting remote root access to the files 

How to Apply Autofs Security Restrictions 

Using a public file handle with autofs 

Force use of the public file handle when mounting a file system 

How to Use a Public File Handle With Autofs 

Using an NFS URL with autofs 

Add an NFS URL so that the automounter can use it 

How to Use NFS URLs With Autofs 

Disable autofs browsability 

Steps to disable browsability so that autofs mount points are not automatically populated on a single client 

How to Completely Disable Autofs Browsability on a Single NFS Client 

Steps to disable browsability so that autofs mount points are not automatically populated on all clients 

How to Disable Autofs Browsability for All Clients 

Steps to disable browsability so that a specific autofs mount point is not automatically populated on a client 

How to Disable Autofs Browsability on a Selected File System 

Administrative Tasks Involving Maps

The following tables describe several of the factors that you need to be aware of when administering autofs maps. The type of map and the name service that you choose determine the mechanism that you must use to make changes to the autofs maps.

The following table describes the types of maps and their uses.

Table 15–6 Types of autofs Maps and Their Uses

Type of Map 

Use 

Master

Associates a directory with a map 

Direct

Directs autofs to specific file systems 

Indirect

Directs autofs to reference-oriented file systems 

The following table describes how to make changes to your autofs environment, based on your name service.

Table 15–7 Map Maintenance

Name Service 

Method 

Local files 

Text editor

NIS 

make files

NIS+ 

nistbladm

The next table tells you when to run the automount command, depending on the modification you have made to the type of map. For example, if you have made an addition or a deletion to a direct map, you need to run the automount command on the local system to allow the change to become effective. However, if you've modified an existing entry, you do not need to run the automount command for the change to become effective.

Table 15–8 When to Run the automount Command

Type of Map 

Restart automount?

 

Addition or Deletion 

Modification 

auto_master

Y

Y

direct

Y

N

indirect

N

N
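For the cases in the table that require restarting automount, the change is applied by rerunning the command on each affected client (run as superuser; the -v flag, which reports the mounts and unmounts performed, is optional):

```shell
# Reread the autofs maps so that an addition or deletion takes effect
/usr/sbin/automount -v
```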

Modifying the Maps

The following procedures require that you use NIS+ as your name service.

How to Modify the Master Map

  1. Log in as a user who has permission to change the maps.

  2. Using the nistbladm command, make your changes to the master map.

    See the System Administration Guide: Naming and Directory Services (FNS and NIS+).

  3. For each client, become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  4. For each client, run the automount command to ensure your changes become effective.

  5. Notify your users of the changes.

    Notification is required so that the users can also run the automount command as superuser on their own computers.

The automount command gathers information from the master map whenever it is run.
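As an illustration of step 2, an entry might be added to the NIS+ master map with a command such as the following. The table name and the key/value column names shown here are assumptions based on the standard NIS+ automounter tables; verify them against your domain before use.

```shell
# Hypothetical: add a /ws entry to the NIS+ master map table
nistbladm -a key=/ws value="auto_ws -nosuid" auto_master.org_dir
```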

How to Modify Indirect Maps

    Log in as a user who has permission to change the maps.

    Using the nistbladm command, make your changes to the indirect map.

    See the System Administration Guide: Naming and Directory Services (FNS and NIS+).

The change becomes effective the next time the map is used, which is the next time a mount is done.

How to Modify Direct Maps

  1. Log in as a user who has permission to change the maps.

  2. Using the nistbladm command, add or delete your changes to the direct map.

    See the System Administration Guide: Naming and Directory Services (FNS and NIS+).

  3. If you added or deleted a mount-point entry in step 2, run the automount command.

  4. Notify your users of the changes.

    Notification is required so that the users can also run the automount command as superuser on their own computers.


    Note –

    If you only modify or change the contents of an existing direct map entry, you do not need to run the automount command.


    For example, suppose you modify the auto_direct map so that the /usr/src directory is now mounted from a different server. If /usr/src is not mounted at this time, the new entry becomes effective immediately when you try to access /usr/src. If /usr/src is mounted now, you can wait until the auto-unmounting occurs, then access it.
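The scenario in this example corresponds to a direct-map entry such as the following, where changing the server name is the only edit required (the server names are hypothetical):

```shell
# auto_direct entry before the move:
/usr/src    oak:/export/src

# ...and after relocating the source tree to another server:
/usr/src    pine:/export/src
```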


    Note –

    Because direct maps require these additional steps, and because indirect maps do not occupy as much space in the mount table, use indirect maps whenever possible. Indirect maps are also easier to construct and less demanding on the computers' file systems.


Avoiding Mount-Point Conflicts

If you have a local disk partition that is mounted on /src and you plan to use the autofs service to mount other source directories, you might encounter a problem. If you specify the mount point /src, the NFS service hides the local partition whenever you try to reach it.

You need to mount the partition in some other location, for example, on /export/src. You then need an entry in /etc/vfstab such as the following:


/dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /export/src ufs 3 yes - 

You also need this entry in auto_src:


terra		terra:/export/src 

terra is the name of the computer.

Accessing Non-NFS File Systems

Autofs can also mount file systems other than NFS file systems. For example, autofs can mount file systems on removable media, such as diskettes or CD-ROMs. Normally, you would mount file systems on removable media by using the Volume Manager. The following examples show how this mounting could be accomplished through autofs instead. Because the Volume Manager and autofs do not work together, these entries cannot be used without first deactivating the Volume Manager.

Instead of mounting a file system from a server, you put the media in the drive and reference it from the map. If you plan to access non-NFS file systems and you are using autofs, see the following procedures.

How to Access CD-ROM Applications With Autofs


Note –

Use this procedure if you are not using Volume Manager.


  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Update the autofs map.

    Add an entry for the CD-ROM file system, which should resemble the following:


    hsfs     -fstype=hsfs,ro     :/dev/sr0

    The name of the CD-ROM device that you intend to mount must follow the colon.

How to Access PC-DOS Data Diskettes With Autofs


Note –

Use this procedure if you are not using Volume Manager.


  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Update the autofs map.

    Add an entry for the diskette file system such as the following:


     pcfs     -fstype=pcfs     :/dev/diskette

Accessing NFS File Systems Using CacheFS

The cache file system (CacheFS) is a generic nonvolatile caching mechanism that improves the performance of certain file systems by utilizing a small, fast, local disk.

You can improve the performance of the NFS environment by using CacheFS to cache data from an NFS file system on a local disk.

How to Access NFS File Systems Using CacheFS

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Run the cfsadmin command to create a cache directory on the local disk.


    # cfsadmin -c /var/cache
    
  3. Add the cachefs entry to the appropriate automounter map.

    For example, adding this entry to the master map caches all home directories:


    /home auto_home -fstype=cachefs,cachedir=/var/cache,backfstype=nfs

    Adding this entry to the auto_home map only caches the home directory for the user who is named rich:


    rich -fstype=cachefs,cachedir=/var/cache,backfstype=nfs dragon:/export/home1/rich

    Note –

    Options that are included in maps that are searched later override options that are set in maps that are searched earlier. The last options that are found are the ones that are used. In the previous example, an entry that is added to the auto_home map needs to include the options that are listed in the master map only if some of those options need to be changed.
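To confirm that the cache created in step 2 is in place, the cache directory can be listed with cfsadmin (a sketch; the exact output varies by release):

```shell
# List the file systems and resource statistics for the cache directory
cfsadmin -l /var/cache
```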


Customizing the Automounter

You can set up the automounter maps in several ways. The following tasks give details on how to customize the automounter maps to provide an easy-to-use directory structure.

Setting Up a Common View of /home

The ideal is for all network users to be able to locate their own or anyone's home directory under /home. This view should be common across all computers, whether client or server.

Every Solaris installation comes with a master map: /etc/auto_master.


# Master map for autofs
#
+auto_master
/net     -hosts     -nosuid,nobrowse
/home    auto_home  -nobrowse
/xfn     -xfn

A map for auto_home is also installed under /etc.


# Home directory map for autofs
#
+auto_home

Except for a reference to an external auto_home map, this map is empty. If the directories under /home are to be common to all computers, do not modify this /etc/auto_home map. All home directory entries should appear in the name service files, either NIS or NIS+.


Note –

Users should not be permitted to run setuid executables from their home directories. Without this restriction, any user could have superuser privileges on any computer.


How to Set Up /home With Multiple Home Directory File Systems

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Install home directory partitions under /export/home.

    If the system has several partitions, install them under separate directories, for example, /export/home1, /export/home2, and so on.

  3. Use the Solstice System Management Tools to create and maintain the auto_home map.

    Whenever you create a new user account, type the location of the user's home directory in the auto_home map. Map entries can be simple, for example:


    rusty        dragon:/export/home1/&
    gwenda       dragon:/export/home1/&
    charles      sundog:/export/home2/&
    rich         dragon:/export/home3/&

    Notice the use of the & (ampersand) to substitute the map key. The ampersand is an abbreviation for the second occurrence of rusty in the following example.


    rusty     	dragon:/export/home1/rusty

    With the auto_home map in place, users can refer to any home directory (including their own) with the path /home/user. user is their login name and the key in the map. This common view of all home directories is valuable when logging in to another user's computer. Autofs mounts your home directory for you. Similarly, if you run a remote windowing system client on another computer, the client program has the same view of the /home directory.

    This common view also extends to the server. Using the previous example, if rusty logs in to the server dragon, autofs there provides direct access to the local disk by loopback-mounting /export/home1/rusty onto /home/rusty.

    Users do not need to be aware of the real location of their home directories. If rusty needs more disk space and needs to have his home directory relocated to another server, you need only change rusty's entry in the auto_home map to reflect the new location. Other users can continue to use the /home/rusty path.
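Continuing the example, relocating rusty's home directory amounts to a one-line change in the auto_home map (the new server and partition names are hypothetical):

```shell
# Before the move: rusty's home directory lives on dragon
rusty        dragon:/export/home1/&

# After the move, only the map entry changes; /home/rusty still works
rusty        sundog:/export/home2/&
```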

How to Consolidate Project-Related Files Under /ws

Assume you are the administrator of a large software development project. You plan to make all project-related files available under a directory that is called /ws. This directory is to be common across all workstations at the site.

  1. Add an entry for the /ws directory to the site auto_master map, either NIS or NIS+.


    /ws     auto_ws     -nosuid 

    The auto_ws map determines the contents of the /ws directory.

  2. Add the -nosuid option as a precaution.

    This option prevents users from running setuid programs that might exist in any workspaces.

  3. Add entries to the auto_ws map.

    The auto_ws map is organized so that each entry describes a subproject. Your first attempt yields a map that resembles the following:


    compiler   alpha:/export/ws/&
    windows    alpha:/export/ws/&
    files      bravo:/export/ws/&
    drivers    alpha:/export/ws/&
    man        bravo:/export/ws/&
    tools      delta:/export/ws/&

    The ampersand (&) at the end of each entry is an abbreviation for the entry key. For instance, the first entry is equivalent to:


    compiler		alpha:/export/ws/compiler 

    This first attempt provides a map that appears simple, but it is inadequate. The project organizer decides that the documentation in the man entry should be provided as a subdirectory under each subproject. Also, each subproject requires subdirectories to describe several versions of the software. You must assign each of these subdirectories to an entire disk partition on the server.

    Modify the entries in the map as follows:


    compiler \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /man        bravo:/export/ws/&/man
    windows \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    files \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /vers3.0    bravo:/export/ws/&/vers3.0 \
        /man        bravo:/export/ws/&/man
    drivers \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    tools \
        /           delta:/export/ws/&

    Although the map now appears to be much larger, it contains only five entries. Each entry is larger because it contains multiple mounts. For instance, a reference to /ws/compiler requires three mounts for the vers1.0, vers2.0, and man directories. The backslash at the end of each line informs autofs that the entry is continued onto the next line. Effectively, the entry is one long line, though line breaks and some indenting have been used to make it more readable. The tools directory contains software development tools for all subprojects, so it is not subject to the same subdirectory structure. The tools directory continues to be a single mount.

    This arrangement provides the administrator with much flexibility. Software projects are notorious for consuming substantial amounts of disk space. Through the life of the project you might be required to relocate and expand various disk partitions. If these changes are reflected in the auto_ws map, the users do not need to be notified, as the directory hierarchy under /ws is not changed.

    Because the servers alpha and bravo view the same autofs map, any users who log in to these computers can find the /ws name space as expected. These users are provided with direct access to local files through loopback mounts instead of NFS mounts.

How to Set Up Different Architectures to Access a Shared Name Space

You need to assemble a shared name space for local executables, and applications, such as spreadsheet applications and word-processing packages. The clients of this name space use several different workstation architectures that require different executable formats. Also, some workstations are running different releases of the operating system.

  1. Create the auto_local map with the nistbladm command.

    See the System Administration Guide: Naming and Directory Services (FNS and NIS+).

  2. Choose a single, site-specific name for the shared name space so that files and directories that belong to this space are easily identifiable.

    For example, if you choose /usr/local as the name, the path /usr/local/bin is obviously a part of this name space.

  3. For ease of user community recognition, create an autofs indirect map and mount it at /usr/local. Set up the following entry in the NIS+ (or NIS) auto_master map:


    /usr/local     auto_local     -ro

    Notice that the -ro mount option implies that clients cannot write to any files or directories.

  4. Export the appropriate directory on the server.

  5. Include a bin entry in the auto_local map.

    Your directory structure resembles the following:


     bin     aa:/export/local/bin 
  6. (Optional) To serve clients of different architectures, change the entry by adding the autofs CPU variable.


    bin     aa:/export/local/bin/$CPU 
    • For SPARC clients – Place executables in /export/local/bin/sparc.

    • For IA clients – Place executables in /export/local/bin/i386.

How to Support Incompatible Client Operating System Versions

  1. Combine the architecture type with a variable that determines the operating system type of the client.

    You can combine the autofs OSREL variable with the CPU variable to form a name that determines both CPU type and OS release.

  2. Create the following map entry.


    bin     aa:/export/local/bin/$CPU$OSREL

    For clients running version 5.6 of the operating system, export the following file systems:

    • For SPARC clients – Export /export/local/bin/sparc5.6.

    • For IA clients – Export /export/local/bin/i3865.6.
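The values that autofs substitutes for these variables come from uname on each client, so you can preview the path that a given client will request:

```shell
# The automounter derives CPU from `uname -p` and OSREL from `uname -r`.
# For the map entry above, a SPARC client running release 5.6 would
# therefore request /export/local/bin/sparc5.6.
echo "bin -> /export/local/bin/$(uname -p)$(uname -r)"
```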

How to Replicate Shared Files Across Several Servers

The best way to share replicated file systems that are read-only is to use failover. See Client-Side Failover for a discussion of failover.

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Modify the entry in the autofs maps.

    Create the list of all replica servers as a comma-separated list, such as the following:


    bin     aa,bb,cc,dd:/export/local/bin/$CPU
    

    Autofs chooses the nearest server. If a server has several network interfaces, list each interface. Autofs chooses the nearest interface to the client, avoiding unnecessary routing of NFS traffic.

How to Apply Autofs Security Restrictions

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Create the following entry in the name service auto_master file, either NIS or NIS+:


    /home     auto_home     -nosuid
    

    The nosuid option prevents users from creating files with the setuid or setgid bit set.

    This entry overrides the entry for /home in a generic local /etc/auto_master file (see the previous example). The override happens because the +auto_master reference to the external name service map occurs before the /home entry in the file. If the entries in the auto_home map include mount options, the nosuid option is overwritten. Therefore, either no options should be used in the auto_home map or the nosuid option must be included with each entry.


    Note –

    Do not mount the home directory disk partitions on or under /home on the server.


How to Use a Public File Handle With Autofs

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Create an entry in the autofs map such as the following:


    /usr/local     -ro,public    bee:/export/share/local

    The public option forces the public handle to be used. If the NFS server does not support a public file handle, the mount fails.

How to Use NFS URLs With Autofs

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Create an autofs entry such as the following:


    /usr/local     -ro    nfs://bee/export/share/local

    The service tries to use the public file handle on the NFS server, but if the server does not support a public file handle, the MOUNT protocol is used.

Disabling Autofs Browsability

Starting with the Solaris 2.6 release, the default version of /etc/auto_master that is installed has the -nobrowse option added to the entries for /home and /net. In addition, the upgrade procedure adds the -nobrowse option to the /home and /net entries in /etc/auto_master if these entries have not been modified. However, you might have to make these changes manually or to turn off browsability for site-specific autofs mount points after the installation.

You can turn off the browsability feature in several ways. Disable the feature by using a command-line option to the automountd daemon, which completely disables autofs browsability for the client. Or disable browsability for each map entry on all clients by using the autofs maps in either a NIS or NIS+ name space. You can also disable the feature for each map entry on each client, using local autofs maps if no network-wide name space is being used.

How to Completely Disable Autofs Browsability on a Single NFS Client

  1. Become superuser or assume an equivalent role on the NFS client.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Add the -n option to the startup script.

    As root, edit the /etc/init.d/autofs script and add the -n option to the line that starts the automountd daemon:


    /usr/lib/autofs/automountd -n \
                    < /dev/null > /dev/console 2>&1  # start daemon
  3. Restart the autofs service.


    # /etc/init.d/autofs stop
    # /etc/init.d/autofs start
    

How to Disable Autofs Browsability for All Clients

To disable browsability for all clients, you must employ a name service such as NIS or NIS+. Otherwise, you need to manually edit the automounter maps on each client. In this example, the browsability of the /home directory is disabled. You must follow this procedure for each indirect autofs node that needs to be disabled.

  1. Add the -nobrowse option to the /home entry in the name service auto_master file.


    /home     auto_home     -nobrowse 
    
  2. On all clients, run the automount command.

    The new behavior becomes effective after you run the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
    

How to Disable Autofs Browsability on a Selected File System

In this example, browsability of the /net directory is disabled. You can use the same procedure for /home or any other autofs mount points.

  1. Check the automount entry in /etc/nsswitch.conf.

    For local file entries to have precedence, the entry in the name service switch file should list files before the name service. For example:


    automount:  files nisplus

    This is the default configuration in a standard Solaris installation.

  2. Check the position of the +auto_master entry in /etc/auto_master.

    For additions to the local files to have precedence over the entries in the name space, the +auto_master entry must be moved below /net:


    # Master map for automounter
    #
    /net    -hosts     -nosuid
    /home   auto_home
    /xfn    -xfn
    +auto_master
    

    A standard configuration places the +auto_master entry at the top of the file. This placement prevents any local changes from being used.

  3. Add the nobrowse option to the /net entry in the /etc/auto_master file.


    /net     -hosts     -nosuid,nobrowse 
    
  4. On all clients, run the automount command.

    The new behavior becomes effective after running the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
    

Strategies for NFS Troubleshooting

When tracking down an NFS problem, remember the main points of possible failure: the server, the client, and the network. The strategy that is outlined in this section tries to isolate each individual component to find the one that is not working. In all situations, the mountd and nfsd daemons must be running on the server in order for remote mounts to succeed.


Note –

The mountd and nfsd daemons start automatically at boot time only if NFS share entries are in the /etc/dfs/dfstab file. Therefore, you must start mountd and nfsd manually when you set up sharing for the first time.
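On the Solaris releases that this guide covers, starting the daemons manually is done through the NFS server run-control script, as superuser:

```shell
# Start mountd and nfsd after populating /etc/dfs/dfstab for the first time
/etc/init.d/nfs.server start
```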


The -intr option is set by default for all mounts. If a program hangs with a “server not responding” message, you can kill the program with the keyboard interrupt Control-c.

When the network or server has problems, programs that access hard-mounted remote files fail differently than those programs that access soft-mounted remote files. Hard-mounted remote file systems cause the client's kernel to retry the requests until the server responds again. Soft-mounted remote file systems cause the client's system calls to return an error after trying for a while. Because these errors can result in unexpected application errors and data corruption, avoid soft mounting.

When a file system is hard mounted, a program that tries to access it hangs if the server fails to respond. In this situation, the NFS system displays the following message on the console:


NFS server hostname not responding still trying

When the server finally responds, the following message appears on the console:


NFS server hostname ok

A program that accesses a soft-mounted file system whose server is not responding generates the following message:


NFS operation failed for server hostname: error # (error_message)

Note –

Because of possible errors, do not soft-mount file systems with read-write data or file systems from which executables are run. Writable data could be corrupted if the application ignores the errors. Mounted executables might not load properly and can fail.


NFS Troubleshooting Procedures

To determine where the NFS service has failed, you need to follow several procedures to isolate the failure. Check for the following items:

In the process of checking these items, it might become apparent that other portions of the network are not functioning, such as the name service or the physical network hardware. The System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) contains debugging procedures for several name services. Also, during the process it might become obvious that the problem is not at the client end, for example, if you get at least one trouble call from every subnet in your work area. In this situation, it is much more expedient to assume that the problem is the server or the network hardware near the server, and to start the debugging process at the server, not at the client.

How to Check Connectivity on an NFS Client

  1. Check that the NFS server is reachable from the client. On the client, type the following command.


    % /usr/sbin/ping bee
    bee is alive

    If the command reports that the server is alive, remotely check the NFS server. See How to Check the NFS Server Remotely.

  2. If the server is not reachable from the client, ensure that the local name service is running.

    For NIS+ clients, type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  3. If the name service is running, ensure that the client has received the correct host information by typing the following:


    % /usr/bin/getent hosts bee
    129.144.83.117	bee.eng.acme.com
  4. If the host information is correct, but the server is not reachable from the client, run the ping command from another client.

    If the command run from a second client fails, see How to Verify the NFS Service on the Server.

  5. If the server is reachable from the second client, use ping to check connectivity of the first client to other systems on the local net.

    If this command fails, check the networking software configuration on the client (/etc/netmasks, /etc/nsswitch.conf, and so forth).

  6. If the software is correct, check the networking hardware.

    Try moving the client onto a second net drop.

How to Check the NFS Server Remotely

  1. Check that the NFS services have started on the NFS server by typing the following command:


    % rpcinfo -s bee|egrep 'nfs|mountd'
     100003  3,2    tcp,udp,tcp6,udp6                nfs     superuser
     100005  3,2,1  ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6  mountd  superuser

    If the daemons have not been started, see How to Restart NFS Services.

  2. Check that the server's nfsd processes are responding.

    On the client, type the following command to test the UDP NFS connections from the server.


    % /usr/bin/rpcinfo -u bee nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers. Using the -t option tests the TCP connection. If this command fails, proceed to How to Verify the NFS Service on the Server.

  3. Check that the server's mountd is responding, by typing the following command.


    % /usr/bin/rpcinfo -u bee mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Using the -t option tests the TCP connection. If either attempt fails, proceed to How to Verify the NFS Service on the Server.

  4. Check the local autofs service if it is being used:


    % cd /net/wasp
    

    Choose a /net or /home mount point that you know should work properly. If this command fails, then as root on the client, type the following to restart the autofs service:


    # /etc/init.d/autofs stop
    # /etc/init.d/autofs start
    
  5. Verify that the file system is shared as expected on the server.


    % /usr/sbin/showmount -e bee
    /usr/src               eng
    /export/share/man      (everyone)

    Check the entry on the server and the local mount entry for errors. Also check the name space. In this instance, if the first client is not in the eng netgroup, that client would not be able to mount the /usr/src file system.

    Check all entries that include mounting information in all of the local files. The list includes /etc/vfstab and all the /etc/auto_* files.
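    The check of the local files can be partly scripted. The following sketch (the helper name is illustrative, not a Solaris command) lists the vfstab-style entries that refer to a given NFS server, so you can compare them against the showmount output:

    ```shell
    # List the NFS mount entries for a given server in a vfstab-style file.
    # vfstab fields: device-to-mount device-to-fsck mount-point fstype ...
    # Usage: nfs_entries_for <server> <vfstab-file>
    nfs_entries_for() {
      server="$1"; file="$2"
      # print the remote resource and its local mount point
      awk -v s="$server" '$1 ~ ("^" s ":") { print $1, $3 }' "$file"
    }
    ```

    For example, `nfs_entries_for bee /etc/vfstab` prints each resource mounted from bee together with its local mount point; repeat for the /etc/auto_* files as needed.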

How to Verify the NFS Service on the Server

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Check that the server can reach the clients.


    # ping lilac
    lilac is alive
  3. If the client is not reachable from the server, ensure that the local name service is running. For NIS+ clients, type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  4. If the name service is running, check the networking software configuration on the server (/etc/netmasks, /etc/nsswitch.conf, and so forth).

  5. Type the following command to check whether the nfsd daemon is running.


    # rpcinfo -u localhost nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    # ps -ef | grep nfsd
    root    232      1  0  Apr 07     ?     0:01 /usr/lib/nfs/nfsd -a 16
    root   3127   2462  1  09:32:57  pts/3  0:00 grep nfsd

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.

  6. Type the following command to check whether the mountd daemon is running.


    # /usr/bin/rpcinfo -u localhost mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting
    # ps -ef | grep mountd
    root    145      1 0 Apr 07  ?     21:57 /usr/lib/autofs/automountd
    root    234      1 0 Apr 07  ?     0:04  /usr/lib/nfs/mountd
    root   3084 2462 1 09:30:20 pts/3  0:00  grep mountd

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.

  7. Type the following command to check whether the rpcbind daemon is running.


    # /usr/bin/rpcinfo -u localhost rpcbind
    program 100000 version 1 ready and waiting
    program 100000 version 2 ready and waiting
    program 100000 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. If rpcbind seems to be hung, either reboot the server or follow the steps in How to Warm-Start rpcbind.

How to Restart NFS Services

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. To enable daemons without rebooting, type the following commands.


# /etc/init.d/nfs.server stop
# /etc/init.d/nfs.server start

This remedy stops the daemons and restarts them, if an entry is in /etc/dfs/dfstab.

How to Warm-Start rpcbind

If the NFS server cannot be rebooted because of work in progress, you can restart rpcbind without having to restart all of the services that use RPC. Just complete a warm start as described in this procedure.

  1. Become superuser or assume an equivalent role.

    For information about roles, see “Using Privileged Applications” in System Administration Guide: Security Services.

  2. Determine the PID for rpcbind.

    Run ps to get the PID, which is the value in the second column.


    # ps -ef |grep rpcbind
        root   115     1  0   May 31 ?        0:14 /usr/sbin/rpcbind
        root 13000  6944  0 11:11:15 pts/3    0:00 grep rpcbind
  3. Send a SIGTERM signal to the rpcbind process.

    In this example, term is the signal that is to be sent and 115 is the PID for the program (see the kill(1) man page). This command causes rpcbind to create a list of the current registered services in /tmp/portmap.file and /tmp/rpcbind.file.


    # kill -s term 115
    

    Note –

    If you do not kill the rpcbind process with the -s term option, you cannot complete a warm start of rpcbind and must reboot the server to restore service.


  4. Restart rpcbind.

    Warm-restart the command so that the files that were created by the kill command are consulted, and the process resumes without requiring a restart of all of the RPC services. See the rpcbind(1M) man page.


    # /usr/sbin/rpcbind -w
    
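The PID lookup in step 2 can be wrapped in a small helper. This sketch (the function name is illustrative) pulls the PID column out of ps -ef output while skipping the grep-style lines whose last field is the bare daemon name rather than a full path:

```shell
# Print the PID of a daemon from `ps -ef` output read on stdin.
# Matches only command columns that end in /<name>, so the
# "grep <name>" line is not reported (its last field has no slash).
# Usage: ps -ef | pid_of rpcbind
pid_of() {
  awk -v name="$1" '$NF ~ ("/" name "$") { print $2 }'
}
```

With this helper, step 3 becomes `kill -s term "$(ps -ef | pid_of rpcbind)"`; note that it assumes the daemon was started with a full path in its command line.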

Identifying Which Host Is Providing NFS File Service

Run the nfsstat command with the -m option to gather current NFS information. The name of the current server is printed after “currserver=”.


% nfsstat -m
/usr/local from bee,wasp:/export/share/local
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,
		acl,rsize=32768,wsize=32768,retrans=5
 Failover: noresponse=0, failover=0, remap=0, currserver=bee
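If you need the current server name in a script, you can extract it from the nfsstat output. The following sketch (the function name is illustrative) prints the value that follows "currserver=":

```shell
# Extract the current server from `nfsstat -m` output read on stdin.
# Usage: nfsstat -m | current_nfs_server
current_nfs_server() {
  # print only the token after "currserver=" on the Failover line
  sed -n 's/.*currserver=\([^ ,]*\).*/\1/p'
}
```

For example, `nfsstat -m | current_nfs_server` would print `bee` for the output shown above.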

How to Verify Options Used With the mount Command

In the Solaris 2.6 release and in any versions of the mount command that were patched after the 2.6 release, no warning is issued for invalid options. The following procedure helps determine whether the options that were supplied either on the command line or through /etc/vfstab were valid.

For this example, assume that the following command has been run:


# mount -F nfs -o ro,vers=2 bee:/export/share/local /mnt
  1. Verify the options by running the following command.


    % nfsstat -m
    /mnt from bee:/export/share/local
    Flags:  vers=2,proto=tcp,sec=sys,hard,intr,dynamic,acl,rsize=8192,wsize=8192,
            retrans=5

    The file system from bee has been mounted with the protocol version set to 2. Unfortunately, the nfsstat command does not display information about all of the options, but using the nfsstat command is the most accurate way to verify the options.

  2. Check the entry in /etc/mnttab.

    The mount command does not allow invalid options to be added to the mount table. Therefore, verify that the options that are listed in the file match those options that are listed on the command line. In this way, you can check those options that are not reported by the nfsstat command.


    # grep bee /etc/mnttab
    bee:/export/share/local /mnt nfs	ro,vers=2,dev=2b0005e 859934818
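Comparing the options field against the options you supplied can also be scripted. This sketch (the helper name is illustrative) reports any expected option that is missing from the mnttab options field for a given mount point; mnttab fields are special, mount point, fstype, options, and time:

```shell
# Report expected mount options missing from an mnttab-style file.
# Usage: check_mnttab_opts <mnttab-file> <mount-point> <opt> [opt ...]
check_mnttab_opts() {
  file="$1"; mnt="$2"; shift 2
  # field 4 of an mnttab entry is the comma-separated option list
  opts=$(awk -v m="$mnt" '$2 == m { print $4 }' "$file")
  for want in "$@"; do
    case ",$opts," in
      *",$want,"*) ;;                 # option is present
      *) echo "missing: $want" ;;
    esac
  done
}
```

For example, `check_mnttab_opts /etc/mnttab /mnt ro vers=2` prints nothing when both options were recorded, and a "missing:" line for each option that was not.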

Troubleshooting Autofs

Occasionally, you might encounter problems with autofs. This section should improve the problem-solving process. It presents a list of the error messages that autofs generates. The list is divided into two parts:

  - Error messages that are generated by automount -v
  - Miscellaneous error messages

Each error message is followed by a description and probable cause of the message.

When troubleshooting, start the autofs programs with the verbose (-v) option. Otherwise, you might experience problems without knowing why.

The following paragraphs are labeled with the error message you are likely to see if autofs fails, and a description of the possible problem.

Error Messages Generated by automount -v


bad key key in direct map mapname

While scanning a direct map, autofs has found an entry key without a prefixed /. Keys in direct maps must be full path names.


bad key key in indirect map mapname

While scanning an indirect map, autofs has found an entry key containing a /. Indirect map keys must be simple names—not path names.


can't mount server:pathname: reason

The mount daemon on the server refuses to provide a file handle for server:pathname. Check the export table on the server.


couldn't create mount point mountpoint: reason

Autofs was unable to create a mount point that was required for a mount. This problem most frequently occurs when you attempt to hierarchically mount all of a server's exported file systems. A required mount point can exist only in a file system that cannot be mounted, because that file system is not exported. The mount point cannot be created because the exported parent file system is exported read-only.


leading space in map entry entry text in mapname

Autofs has discovered an entry in an automount map that contains leading spaces. This problem is usually an indication of an improperly continued map entry. For example:


fake
/blat   		frobz:/usr/frotz 

In this example, the warning is generated when autofs encounters the second line because the first line should be terminated with a backslash (\).
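Leading-space entries like this one are easy to find before the automounter complains. The following sketch (the function name is illustrative) prints each offending line with its line number:

```shell
# Report map lines that begin with whitespace, which usually indicates
# a map entry that was continued without a trailing backslash.
# Usage: find_leading_space <map-file>
find_leading_space() {
  grep -n '^[[:space:]]' "$1"
}
```

Run it against each map, for example `find_leading_space /etc/auto_home`, and fix any reported line by joining it to the previous entry or terminating the previous line with a backslash.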


mapname: Not found

The required map cannot be located. This message is produced only when the -v option is used. Check the spelling and path name of the map name.


remount server:pathname on mountpoint: server not responding

Autofs has failed to remount a file system it previously unmounted.


WARNING: mountpoint already mounted on

Autofs is attempting to mount over an existing mount point. This message means an internal error occurred in autofs (an anomaly).

Miscellaneous Error Messages


dir mountpoint must start with '/'

The automounter mount point must be given as a full path name. Check the spelling and path name of the mount point.


hierarchical mountpoints: pathname1 and pathname2

Autofs does not allow its mount points to have a hierarchical relationship. An autofs mount point must not be contained within another automounted file system.


host server not responding

Autofs attempted to contact server, but received no response.


hostname: exports: rpc_err

An error occurred while getting the export list from hostname. This message indicates a server or network problem.


map mapname, key key: bad

The map entry is malformed, and autofs cannot interpret it. Recheck the entry. Perhaps the entry has characters that need escaping.


mapname: nis_err

An error occurred when looking up an entry in a NIS map. This message can indicate NIS problems.


mount of server:pathname on mountpoint: reason

Autofs failed to do a mount. This can indicate a server or network problem.


mountpoint: Not a directory

Autofs cannot mount itself on mountpoint because it is not a directory. Check the spelling and path name of the mount point.


nfscast: cannot send packet: reason

Autofs cannot send a query packet to a server in a list of replicated file system locations.


nfscast: cannot receive reply: reason

Autofs cannot receive replies from any of the servers in a list of replicated file system locations.


nfscast: select: reason

All these error messages indicate problems in attempting to ping servers for a replicated file system. This message can indicate a network problem.


pathconf: no info for server:pathname

Autofs failed to get pathconf information for the path name (see the fpathconf(2) man page).


pathconf: server: server not responding

Autofs is unable to contact the mount daemon on server that provides the information to pathconf().

Other Errors With Autofs

If the /etc/auto* files have the execute bit set, the automounter tries to execute the maps, which creates messages such as the following:

/etc/auto_home: +auto_home: not found

In this situation, the auto_home file has incorrect permissions. Each entry in the file generates an error message similar to this one. The permissions to the file should be reset by typing the following command:


# chmod 644 /etc/auto_home
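If several map files have picked up the execute bit, the fix can be applied to all of them at once. This sketch (the function name and directory argument are illustrative; the real maps live in /etc) clears the execute bits on every auto_* file in a directory:

```shell
# Reset permissions on automounter map files so the automounter reads
# them instead of trying to execute them.
# Usage: fix_map_perms <directory>
fix_map_perms() {
  dir="$1"
  for f in "$dir"/auto_*; do
    [ -f "$f" ] || continue   # skip if no auto_* files match
    chmod 644 "$f"
  done
}
```

For example, `fix_map_perms /etc` would reset /etc/auto_home, /etc/auto_master, and any other auto_* maps in one step.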

NFS Error Messages

This section shows an error message that is followed by a description of the conditions that can create the error and at least one remedy.



Bad argument specified with index option - must be a file

You must include a file name with the index option. You cannot use directory names.


Cannot establish NFS service over /dev/tcp: transport setup problem

This message is often created when the services information in the name space has not been updated. The message can also be reported for UDP. To fix this problem, you must update the services data in the name space. For NIS+ the entries should be as follows:


nfsd nfsd tcp 2049 NFS server daemon
nfsd nfsd udp 2049 NFS server daemon

For NIS and /etc/services, the entries should be as follows:


nfsd    2049/tcp    nfs    # NFS server daemon
nfsd    2049/udp    nfs    # NFS server daemon

Cannot use index option without public option

Include the public option with the index option in the share command. You must define the public file handle in order for the index option to work.


Note –

The Solaris 2.5.1 release required that the public file handle be set by using the share command. A change in the Solaris 2.6 release sets the public file handle to be / by default. This error message is no longer relevant.



Could not start daemon: error

This message is displayed if the daemon terminates abnormally or if a system call error occurs. The error string defines the problem.


Could not use public filehandle in request to server

This message is displayed if the public option is specified but the NFS server does not support the public file handle. In this situation, the mount fails. To remedy this situation, either try the mount request without using the public file handle or reconfigure the NFS server to support the public file handle.


daemon running already with pid pid

The daemon is already running. If you want to run a new copy, kill the current version, and start a new version.


error locking lock file

This message is displayed when the lock file that is associated with a daemon cannot be locked properly.


error checking lock file: error

This message is displayed when the lock file that is associated with a daemon cannot be opened properly.


NOTICE: NFS3: failing over from host1 to host2

This message is displayed on the console when a failover occurs. It is an advisory message only.


filename: File too large

An NFS version 2 client is trying to access a file that is over 2 Gbytes.


mount: ... server not responding: RPC_PMAP_FAILURE - RPC_TIMED_OUT

The server that is sharing the file system you are trying to mount is down or unreachable, at the wrong run level, or its rpcbind is dead or hung.


mount: ... server not responding: RPC_PROG_NOT_REGISTERED

The mount request registered with rpcbind, but the NFS mount daemon mountd is not registered.


mount: ... No such file or directory

Either the remote directory or the local directory does not exist. Check the spelling of the directory names. Run ls on both directories.


mount: ...: Permission denied

Your computer name might not be in the list of clients or netgroup that is allowed access to the file system you tried to mount. Use showmount -e to verify the access list.


nfs mount: ignoring invalid option "-option"

The -option flag is not valid. Refer to the mount_nfs(1M) man page to verify the required syntax.


Note –

This error message is not displayed when running any version of the mount command that is included in a Solaris release from 2.6 to the current release or in earlier versions that have been patched.



nfs mount: NFS can't support "nolargefiles"

An NFS client has attempted to mount a file system from an NFS server by using the -nolargefiles option. This option is not supported for NFS file system types.


nfs mount: NFS V2 can't support "largefiles"

The NFS version 2 protocol cannot handle large files. You must use version 3 if access to large files is required.


NFS server hostname not responding still trying

If programs hang while doing file-related work, your NFS server might be dead. This message indicates that NFS server hostname is down or that a problem has occurred with the server or the network. If failover is being used, hostname is a list of servers. Start troubleshooting with How to Check Connectivity on an NFS Client.


NFS fsstat failed for server hostname: RPC: Authentication error

This error can be caused by many situations. One of the most difficult situations to debug is when this problem occurs because a user is in too many groups. Currently, a user can be in as many as 16 groups but no more if the user is accessing files through NFS mounts. An alternative does exist for users who need to be in more than 16 groups. You can use access control lists to provide the needed access privileges, if you run at least the Solaris 2.5 release on the NFS server and the NFS clients.


port number in nfs URL not the same as port number in port option

The port number that is included in the NFS URL must match the port number included with the -port option to mount. If the port numbers do not match, the mount fails. Either change the command to make the port numbers identical or do not specify the port number that is incorrect. Usually, you do not need to specify the port number in the NFS URL and with the -port option.


replicas must have the same version

For NFS failover to function properly, the NFS servers that are replicas must support the same version of the NFS protocol. Mixing version 2 and version 3 servers is not allowed.


replicated mounts must be read-only

NFS failover does not work on file systems that are mounted read-write. Mounting the file system read-write increases the likelihood that a file will change. NFS failover depends on the file systems being identical.


replicated mounts must not be soft

Replicated mounts require that you wait for a timeout before failover occurs. The soft option requires that the mount fail immediately when a timeout starts, so you cannot include the -soft option with a replicated mount.


share_nfs: Cannot share more than one filesystem with 'public' option

Check that the /etc/dfs/dfstab file has only one file system selected to be shared with the -public option. Only one public file handle can be established per server, so only one file system per server can be shared with this option.
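A quick way to check the dfstab file is to count the share lines that mention the public option. This sketch (the function name is illustrative, and the match is deliberately crude, so a path containing "public" would also be counted) flags the duplicate-share condition:

```shell
# Count the share commands in a dfstab-style file whose line mentions
# "public"; at most one file system per server may be shared this way.
# Usage: count_public_shares <dfstab-file>
count_public_shares() {
  awk '/^[[:space:]]*share/ && /public/ { n++ } END { print n+0 }' "$1"
}
```

If `count_public_shares /etc/dfs/dfstab` prints a value greater than 1, remove the public option from all but one of the share entries.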


WARNING: No network locking on hostname:path: contact admin to install server change

An NFS client has unsuccessfully attempted to establish a connection with the network lock manager on an NFS server. Rather than fail the mount, this warning is generated to warn you that locking does not work.