System Administration Guide: Network Services

Chapter 5 Network File System Administration (Tasks)

This chapter provides information about how to perform such NFS administration tasks as setting up NFS services, adding new file systems to share, and mounting file systems. The chapter also covers the use of the Secure NFS system and the use of WebNFS functionality. The last part of the chapter includes troubleshooting procedures and a list of some of the NFS error messages and their meanings.

Your responsibilities as an NFS administrator depend on your site's requirements and the role of your computer on the network. You might be responsible for all the computers on your local network, in which case you might be responsible for determining these configuration items:

Maintaining a server after it has been set up involves the following tasks:

Remember, a computer can be both a server and a client. So, a computer can be used to share local file systems with remote computers and to mount remote file systems.


Note –

If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Oracle Solaris Containers-Resource Management and Oracle Solaris Zones for more information.


Automatic File System Sharing

Servers provide access to their file systems by sharing the file systems over the NFS environment. You specify which file systems are to be shared with the share command or with the /etc/dfs/dfstab file.

Entries in the /etc/dfs/dfstab file are shared automatically whenever you start NFS server operation. You should set up automatic sharing if you need to share the same set of file systems on a regular basis. For example, if your computer is a server that supports home directories, you need to make the home directories available at all times. Most file system sharing should be done automatically. The only time that manual sharing should occur is during testing or troubleshooting.

The dfstab file lists all the file systems that your server shares with its clients. This file also controls which clients can mount a file system. You can modify dfstab to add or delete a file system or to change the way sharing occurs. Just edit the file with any supported text editor (such as vi). The next time that the computer enters run level 3, the system reads the updated dfstab to determine which file systems should be shared automatically.

Each line in the dfstab file consists of a share command, the same command that you type at the command-line prompt to share the file system. The share command is located in /usr/sbin.
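
For example, a dfstab file that automatically shares one file system might resemble the following sketch. The comment header is typical of the installed file, and the shared path is only an illustration.


#       Place share(1M) commands here for automatic execution
#       on entering init state 3.
share -F nfs -o rw /export/home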

Table 5–1 File-System Sharing Task Map

Task 

Description 

For Instructions 

Establish automatic file system sharing 

Steps to configure a server so that file systems are automatically shared when the server is rebooted 

How to Set Up Automatic File-System Sharing

Enable WebNFS 

Steps to configure a server so that users can access files by using WebNFS 

How to Enable WebNFS Access

Enable NFS server logging 

Steps to configure a server so that NFS logging is run on selected file systems 

How to Enable NFS Server Logging

How to Set Up Automatic File-System Sharing

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Add entries for each file system to be shared.

    Edit /etc/dfs/dfstab. Add one entry to the file for every file system that you want to be automatically shared. Each entry must be on a line by itself in the file and use this syntax:


    share [-F nfs] [-o specific-options] [-d description] pathname

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  3. Share the file system.

    After the entry is in /etc/dfs/dfstab, you can share the file system by either rebooting the system or by using the shareall command.


    # shareall
    
  4. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,public  ""
See Also

The next step is to set up your autofs maps so that clients can access the file systems that you have shared on the server. See Task Overview for Autofs Administration.

How to Enable WebNFS Access

Starting with the Solaris 2.6 release, by default all file systems that are available for NFS mounting are automatically available for WebNFS access. The only condition that requires the use of this procedure is one of the following:

See Planning for WebNFS Access for a list of issues to consider before starting the WebNFS service.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Add entries for each file system to be shared by using the WebNFS service.

    Edit /etc/dfs/dfstab. Add one entry to the file for every file system. The public and index tags that are shown in the following example are optional.


    share -F nfs -o ro,public,index=index.html /export/ftp

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  3. Share the file system.

    After the entry is in /etc/dfs/dfstab, you can share the file system by either rebooting the system or by using the shareall command.


    # shareall
    
  4. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,public,index=index.html  ""

How to Enable NFS Server Logging

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. (Optional) Change file system configuration settings.

    In /etc/nfs/nfslog.conf, you can change the settings in one of two ways. You can edit the default settings for all file systems by changing the data that is associated with the global tag. Alternately, you can add a new tag for this file system. If these changes are not needed, you do not need to change this file. The format of /etc/nfs/nfslog.conf is described in nfslog.conf(4).
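
    For reference, the default global entry typically resembles the following sketch. Exact paths and defaults can vary by release, so treat this only as an illustration and consult the nfslog.conf(4) man page.


    global  defaultdir=/var/nfs \
            log=nfslog fhtable=fhtable buffer=nfslog_workbuffer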

  3. Add entries for each file system to be shared by using NFS server logging.

    Edit /etc/dfs/dfstab. Add one entry to the file for the file system on which you are enabling NFS server logging. The tag that is used with the log=tag option must be entered in /etc/nfs/nfslog.conf. This example uses the default settings in the global tag.


    share -F nfs -o ro,log=global /export/ftp

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  4. Share the file system.

    After the entry is in /etc/dfs/dfstab, you can share the file system by either rebooting the system or by using the shareall command.


    # shareall
    
  5. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,log=global  ""
  6. Check if nfslogd, the NFS log daemon, is running.


    # ps -ef | grep nfslogd
    
  7. (Optional) Start nfslogd, if it is not running already.

    • (Optional) If /etc/nfs/nfslogtab is present, start the NFS log daemon by typing the following:


      # svcadm restart network/nfs/server:default
      
    • (Optional) If /etc/nfs/nfslogtab is not present, run any of the share commands to create the file and then start the daemon.


      # shareall
      # svcadm restart network/nfs/server:default
      

Mounting File Systems

You can mount file systems in several ways. File systems can be mounted automatically when the system is booted, on demand from the command line, or through the automounter. The automounter provides many advantages to mounting at boot time or mounting from the command line. However, many situations require a combination of all three methods. Additionally, several ways of enabling or disabling processes exist, depending on the options you use when mounting the file system. See the following table for a complete list of the tasks that are associated with file system mounting.

Table 5–2 Task Map for Mounting File Systems

Task 

Description 

For Instructions 

Mount a file system at boot time 

Steps so that a file system is mounted whenever a system is rebooted. 

How to Mount a File System at Boot Time.

Mount a file system by using a command 

Steps to mount a file system when a system is running. This procedure is useful when testing. 

How to Mount a File System From the Command Line.

Mount with the automounter 

Steps to access a file system on demand without using the command line. 

Mounting With the Automounter.

Prevent large files 

Steps to prevent large files from being created on a file system. 

How to Disable Large Files on an NFS Server.

Start client-side failover 

Steps to enable the automatic switchover to a working file system if a server fails. 

How to Use Client-Side Failover.

Disable mount access for a client 

Steps to disable the ability of one client to access a remote file system. 

How to Disable Mount Access for One Client.

Provide access to a file system through a firewall 

Steps to allow access to a file system through a firewall by using the WebNFS protocol. 

How to Mount an NFS File System Through a Firewall.

Mount a file system by using an NFS URL 

Steps to allow access to a file system by using an NFS URL. This process allows for file system access without using the MOUNT protocol. 

How to Mount an NFS File System Using an NFS URL.

How to Mount a File System at Boot Time

If you want to mount file systems at boot time instead of using autofs maps, follow this procedure. This procedure must be completed on every client that should have access to remote file systems.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Add an entry for the file system to /etc/vfstab.

    Entries in the /etc/vfstab file have the following syntax:

    special  fsckdev  mountp  fstype  fsckpass  mount-at-boot  mntopts

    See the vfstab(4) man page for more information.


    Caution –

    NFS servers that also have NFS client vfstab entries must always specify the bg option to avoid a system hang during reboot. For more information, see mount Options for NFS File Systems.



Example 5–1 Entry in the Client's vfstab File

You want a client machine to mount the /var/mail directory from the server wasp. You want the file system to be mounted as /var/mail on the client and you want the client to have read-write access. Add the following entry to the client's vfstab file.


wasp:/var/mail - /var/mail nfs - yes rw

How to Mount a File System From the Command Line

Mounting a file system from the command line is often performed to test a new mount point. This type of mount allows for temporary access to a file system that is not available through the automounter.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Mount the file system.

    Type the following command:


    # mount -F nfs -o ro bee:/export/share/local /mnt
    

    In this instance, the /export/share/local file system from the server bee is mounted read-only on /mnt on the local system. Mounting from the command line allows for temporary viewing of the file system. You can unmount the file system with umount or by rebooting the local host.
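
    For example, to unmount the file system that was mounted in the previous command:


    # umount /mnt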


    Caution –

    Starting with the Solaris 2.6 release, the mount command does not warn about invalid options. The command silently ignores any options that cannot be interpreted. To prevent unexpected behavior, ensure that you verify all of the options that were used.


Mounting With the Automounter

Task Overview for Autofs Administration includes the specific instructions for establishing and supporting mounts with the automounter. Without any changes to the generic system, clients should be able to access remote file systems through the /net mount point. To mount the /export/share/local file system from the previous example, type the following:


% cd /net/bee/export/share/local

Because the automounter allows all users to mount file systems, root access is not required. The automounter also provides for automatic unmounting of file systems, so you do not need to unmount file systems after you are finished.

How to Disable Large Files on an NFS Server

For servers that support clients that cannot handle files over 2 GBytes, you might need to disable the ability to create large files.


Note –

Solaris releases prior to 2.6 cannot use large files. If the clients need to access large files, check that the clients of the NFS server are running, at minimum, the 2.6 release.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Check that no large files exist on the file system.

    For example:


    # cd /export/home1
    # find . -xdev -size +2000000 -exec ls -l {} \;
    

    If large files are on the file system, you must remove or move these files to another file system.

  3. Unmount the file system.


    # umount /export/home1
    
  4. Reset the file system state if the file system has been mounted by using largefiles.

    fsck resets the file system state if no large files exist on the file system:


    # fsck /export/home1
    
  5. Mount the file system by using nolargefiles.


    # mount -F ufs -o nolargefiles /export/home1
    

    You can mount from the command line, but to make the option more permanent, add an entry that resembles the following into /etc/vfstab:


    /dev/dsk/c0t3d0s1 /dev/rdsk/c0t3d0s1 /export/home1  ufs  2  yes  nolargefiles

How to Use Client-Side Failover

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. On the NFS client, mount the file system by using the ro option.

    You can mount from the command line, through the automounter, or by adding an entry to /etc/vfstab that resembles the following:


    bee,wasp:/export/share/local  -  /usr/local  nfs  -  no  ro

    This syntax has always been acceptable to the automounter. However, previously failover applied only while a server was being selected, not after the file system was mounted.


    Note –

    Do not mix servers that are running different versions of the NFS protocol on the command line or in a vfstab entry. Mixing servers that support NFS version 2, version 3, or version 4 protocols can only be performed with autofs. In autofs, the best subset of version 2, version 3, or version 4 servers is used.
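
    For reference, an equivalent command-line mount of the replicated file systems in the preceding vfstab entry might resemble the following sketch. The ro option is required because client-side failover works only with read-only file systems.


    # mount -F nfs -o ro bee,wasp:/export/share/local /usr/local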


How to Disable Mount Access for One Client

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Add an entry in /etc/dfs/dfstab.

    The first example allows mount access to all clients in the eng netgroup except the host that is named rose. The second example allows mount access to all clients in the eng.example.com DNS domain except for rose.


    share -F nfs -o ro=-rose:eng /export/share/man
    share -F nfs -o ro=-rose:.eng.example.com /export/share/man

    For additional information about access lists, see Setting Access Lists With the share Command. For a description of /etc/dfs/dfstab, see dfstab(4).

  3. Share the file system.

    The NFS server does not use changes to /etc/dfs/dfstab until the file systems are shared again or until the server is rebooted.


    # shareall

How to Mount an NFS File System Through a Firewall

To access file systems through a firewall, use the following procedure.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Manually mount the file system by using a command such as the following:


    # mount -F nfs bee:/export/share/local /mnt
    

    In this example, the file system /export/share/local is mounted on the local client by using the public file handle. An NFS URL can be used instead of the standard path name. If the public file handle is not supported by the server bee, the mount operation fails.


    Note –

    This procedure requires that the file system on the NFS server be shared by using the public option. Additionally, any firewalls between the client and the server must allow TCP connections on port 2049. Starting with the Solaris 2.6 release, all file systems that are shared allow for public file handle access, so explicitly specifying the public option is usually unnecessary.
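
    For example, the equivalent mount that uses an NFS URL instead of the standard path name might resemble the following:


    # mount -F nfs nfs://bee/export/share/local /mnt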


How to Mount an NFS File System Using an NFS URL

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. (Optional) If you are using NFS version 2 or version 3, manually mount the file system by using a command such as the following:


    # mount -F nfs nfs://bee:3000/export/share/local /mnt
    

    In this example, the /export/share/local file system is being mounted from the server bee by using NFS port number 3000. The port number is not required and by default the standard NFS port number of 2049 is used. You can choose to include the public option with an NFS URL. Without the public option, the MOUNT protocol is used if the public file handle is not supported by the server. The public option forces the use of the public file handle, and the mount fails if the public file handle is not supported.

  3. (Optional) If you are using NFS version 4, manually mount the file system by using a command such as the following:


    # mount -F nfs -o vers=4 nfs://bee:3000/export/share/local /mnt
    

Setting Up NFS Services

This section describes some of the tasks that are necessary to do the following:


Note –

Starting in the Solaris 10 release, NFS version 4 is the default.


Table 5–3 Task Map for NFS Services

Task 

Description 

For Instructions 

Start the NFS server 

Steps to start the NFS service if it has not been started automatically. 

How to Start the NFS Services

Stop the NFS server 

Steps to stop the NFS service. Normally the service should not need to be stopped. 

How to Stop the NFS Services

Start the automounter 

Steps to start the automounter. This procedure is required when some of the automounter maps are changed. 

How to Start the Automounter

Stop the automounter 

Steps to stop the automounter. This procedure is required when some of the automounter maps are changed. 

How to Stop the Automounter

Select a different version of NFS on the server 

Steps to select a different version of NFS on the server. If you choose not to use NFS version 4, use this procedure. 

How to Select Different Versions of NFS on a Server

Select a different version of NFS on the client 

Steps to select a different version of NFS on the client by modifying the /etc/default/nfs file. If you choose not to use NFS version 4, use this procedure.

How to Select Different Versions of NFS on a Client by Modifying the /etc/default/nfs File

 

Alternate steps to select a different version of NFS on the client by using the command line. If you choose not to use NFS version 4, use this alternate procedure. 

How to Use the Command Line to Select Different Versions of NFS on a Client

How to Start the NFS Services

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Enable the NFS service on the server.

    Type the following command.


    # svcadm enable network/nfs/server
    

    This command enables the NFS service.


    Note –

    Starting with the Solaris 9 release, the NFS server starts automatically when you boot the system. Additionally, any time after the system has been booted, the NFS service daemons can be automatically enabled by sharing the NFS file system. See How to Set Up Automatic File-System Sharing.


How to Stop the NFS Services

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Disable the NFS service on the server.

    Type the following command.


    # svcadm disable network/nfs/server
    

How to Start the Automounter

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Enable the autofs daemon.

    Type the following command:


    # svcadm enable system/filesystem/autofs
    

How to Stop the Automounter

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Disable the autofs daemon.

    Type the following command:


    # svcadm disable system/filesystem/autofs
    

How to Select Different Versions of NFS on a Server

If you choose not to use NFS version 4, use this procedure.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Edit the /etc/default/nfs file.

    For example, if you want the server to provide only version 3, set the values for both NFS_SERVER_VERSMAX and NFS_SERVER_VERSMIN to 3. For a list of keywords and their values, refer to Keywords for the /etc/default/nfs File.


    NFS_SERVER_VERSMAX=value
    NFS_SERVER_VERSMIN=value
    
    value

    Provide the version number.


    Note –

    By default, these lines are commented out. Remember to remove the pound (#) sign as well.
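
    For example, to limit the server to NFS version 3, the uncommented lines would read as follows:


    NFS_SERVER_VERSMAX=3
    NFS_SERVER_VERSMIN=3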


  3. Alternatively, on systems that use the sharectl command, change SMF parameters to set the NFS version numbers.

    For example, if you want the server to provide only version 3, set the values for both server_versmax and server_versmin to 3, as shown below:


    # sharectl set -p server_versmax=3 nfs
    # sharectl set -p server_versmin=3 nfs
    
  4. (Optional) If you want to disable server delegation, include this line in the /etc/default/nfs file.


    NFS_SERVER_DELEGATION=off
    

    Note –

    In NFS version 4, server delegation is enabled by default. For more information, see Delegation in NFS Version 4.


  5. (Optional) If you want to set a common domain for clients and servers, include this line in the /etc/default/nfs file.


    NFSMAPID_DOMAIN=my.company.com
    
    my.company.com

    Provide the common domain.

    For more information, refer to nfsmapid Daemon.

  6. Check if the NFS service is running on the server.

    Type the following command:


    # svcs network/nfs/server
    

    This command reports whether the NFS server service is online or disabled.
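
    For example, output that resembles the following indicates that the service is online. The time stamp is only illustrative.


    # svcs network/nfs/server
    STATE          STIME    FMRI
    online         13:25:43 svc:/network/nfs/server:default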

  7. (Optional) If necessary, disable the NFS service.

    If you discovered from the previous step that the NFS service is online, type the following command to disable the service.


    # svcadm disable network/nfs/server
    

    Note –

    If you need to configure your NFS service, refer to How to Set Up Automatic File-System Sharing.


  8. Enable the NFS service.

    Type the following command to enable the service.


    # svcadm enable network/nfs/server
    
See Also

Version Negotiation in NFS

How to Select Different Versions of NFS on a Client by Modifying the /etc/default/nfs File

The following procedure shows you how to control which version of NFS is used on the client by modifying the /etc/default/nfs file. If you prefer to use the command line, refer to How to Use the Command Line to Select Different Versions of NFS on a Client.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Edit the /etc/default/nfs file.

    For example, if you want only version 3 on the client, set the values for both NFS_CLIENT_VERSMAX and NFS_CLIENT_VERSMIN to 3. For a list of keywords and their values, refer to Keywords for the /etc/default/nfs File.


    NFS_CLIENT_VERSMAX=value
    NFS_CLIENT_VERSMIN=value
    
    value

    Provide the version number.


    Note –

    By default, these lines are commented out. Remember to remove the pound (#) sign as well.
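
    For example, to limit the client to NFS version 3, the uncommented lines would read as follows:


    NFS_CLIENT_VERSMAX=3
    NFS_CLIENT_VERSMIN=3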


  3. Mount NFS on the client.

    Type the following command:


    # mount server-name:/share-point /local-dir
    
    server-name

    Provide the name of the server.

    /share-point

    Provide the path of the remote directory to be shared.

    /local-dir

    Provide the path of the local mount point.
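
    For example, a mount that uses the server bee from the earlier examples in this chapter might look like the following:


    # mount bee:/export/share/local /mnt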

See Also

Version Negotiation in NFS

How to Use the Command Line to Select Different Versions of NFS on a Client

The following procedure shows you how to use the command line to control which version of NFS is used on a client for a particular mount. If you prefer to modify the /etc/default/nfs file, see How to Select Different Versions of NFS on a Client by Modifying the /etc/default/nfs File.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Mount the desired version of NFS on the client.

    Type the following command:


    # mount -o vers=value server-name:/share-point /local-dir
    
    value

    Provide the version number.

    server-name

    Provide the name of the server.

    /share-point

    Provide the path of the remote directory to be shared.

    /local-dir

    Provide the path of the local mount point.


    Note –

    This command uses the NFS protocol to mount the remote directory and overrides the client settings in the /etc/default/nfs file.
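
    For example, to force NFS version 3 when mounting from the server bee that is used in the earlier examples:


    # mount -o vers=3 bee:/export/share/local /mnt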


See Also

Version Negotiation in NFS

Administering the Secure NFS System

To use the Secure NFS system, all the computers that you are responsible for must have a domain name. Typically, a domain is an administrative entity of several computers that is part of a larger network. If you are running a name service, you should also establish the name service for the domain. See System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).

Kerberos V5 authentication is supported by the NFS service. Chapter 21, Introduction to the Kerberos Service, in System Administration Guide: Security Services discusses the Kerberos service.

You can also configure the Secure NFS environment to use Diffie-Hellman authentication. Chapter 16, Using Authentication Services (Tasks), in System Administration Guide: Security Services discusses this authentication service.

How to Set Up a Secure NFS Environment With DH Authentication

  1. Assign your domain a domain name, and make the domain name known to each computer in the domain.

    See the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) if you are using NIS+ as your name service.

  2. Establish public keys and secret keys for your clients' users by using the newkey or nisaddcred command. Have each user establish his or her own secure RPC password by using the chkey command.


    Note –

    For information about these commands, see the newkey(1M), the nisaddcred(1M), and the chkey(1) man pages.


    When public keys and secret keys have been generated, the public keys and encrypted secret keys are stored in the publickey database.

  3. Verify that the name service is responding.

    If you are running NIS+, type the following:


    # nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995

    If you are running NIS, verify that the ypbind daemon is running.

  4. Verify that the keyserv daemon of the key server is running.

    Type the following command.


    # ps -ef | grep keyserv
    root    100      1  16    Apr 11 ?        0:00 /usr/sbin/keyserv
    root   2215   2211   5  09:57:28 pts/0    0:00 grep keyserv

    If the daemon is not running, start the key server by typing the following:


    # /usr/sbin/keyserv
    
  5. Decrypt and store the secret key.

    Usually, the login password is identical to the network password. In this situation, keylogin is not required. If the passwords are different, the users have to log in, and then run keylogin. You still need to use the keylogin -r command as root to store the decrypted secret key in /etc/.rootkey.


    Note –

    You need to run keylogin -r if the root secret key changes or if /etc/.rootkey is lost.


  6. Update mount options for the file system.

    For Diffie-Hellman authentication, edit the /etc/dfs/dfstab file and add the sec=dh option to the appropriate entries.


    share -F nfs -o sec=dh /export/home
    

    See the dfstab(4) man page for a description of /etc/dfs/dfstab.

  7. Update the automounter maps for the file system.

    Edit the auto_master data to include sec=dh as a mount option in the appropriate entries for Diffie-Hellman authentication:


    /home	auto_home	-nosuid,sec=dh

    Note –

    Releases through Solaris 2.5 have a limitation. If a client does not securely mount a shared file system that is secure, users have access as nobody rather than as themselves. For subsequent releases that use version 2, the NFS server refuses access if the security modes do not match, unless -sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so clients do not need to specify sec=dh. The users have access to the files as themselves.


    When you reinstall, move, or upgrade a computer, remember to save /etc/.rootkey if you do not establish new keys or change the keys for root. If you do delete /etc/.rootkey, you can always type the following:


    # keylogin -r
    

WebNFS Administration Tasks

This section provides instructions for administering the WebNFS system. Related tasks follow.

Table 5–4 Task Map for WebNFS Administration

Task 

Description 

For Instructions 

Plan for WebNFS 

Issues to consider before enabling the WebNFS service. 

Planning for WebNFS Access

Enable WebNFS 

Steps to enable mounting of an NFS file system by using the WebNFS protocol. 

How to Enable WebNFS Access

Enable WebNFS through a firewall 

Steps to allow access to files through a firewall by using the WebNFS protocol. 

How to Enable WebNFS Access Through a Firewall

Browse by using an NFS URL 

Instructions for using an NFS URL within a web browser. 

How to Browse Using an NFS URL

Use a public file handle with autofs 

Steps to force use of the public file handle when mounting a file system with the automounter. 

How to Use a Public File Handle With Autofs

Use an NFS URL with autofs 

Steps to add an NFS URL to the automounter maps. 

How to Use NFS URLs With Autofs

Provide access to a file system through a firewall 

Steps to allow access to a file system through a firewall by using the WebNFS protocol. 

How to Mount an NFS File System Through a Firewall

Mount a file system by using an NFS URL 

Steps to allow access to a file system by using an NFS URL. This process allows for file system access without using the MOUNT protocol. 

How to Mount an NFS File System Using an NFS URL

Planning for WebNFS Access

To use WebNFS, you first need an application that is capable of running and loading an NFS URL (for example, nfs://server/path). The next step is to choose the file system that can be exported for WebNFS access. If the application is web browsing, often the document root for the web server is used. You need to consider several factors when choosing a file system to export for WebNFS access.

  1. Each server has one public file handle that by default is associated with the server's root file system. The path in an NFS URL is evaluated relative to the directory with which the public file handle is associated. If the path leads to a file or directory within an exported file system, the server provides access. You can use the public option of the share command to associate the public file handle with a specific exported directory. Using this option allows URLs to be relative to the shared file system rather than to the server's root file system. The root file system does not allow web access unless the root file system is shared.

  2. The WebNFS environment enables users who already have mount privileges to access files through a browser. This capability is enabled regardless of whether the file system is exported by using the public option. Because users already have access to these files through the NFS setup, this access should not create any additional security risk. You only need to share a file system by using the public option if users who cannot mount the file system need to use WebNFS access.

  3. File systems that are already open to the public make good candidates for using the public option. Some examples are the top directory in an ftp archive or the main URL directory for a web site.

  4. You can use the index option with the share command to force the loading of an HTML file. Otherwise, a listing of the directory is displayed when an NFS URL is accessed.

    After a file system is chosen, review the files and set access permissions to restrict viewing of files or directories, as needed. Establish the permissions, as appropriate, for any NFS file system that is being shared. For many sites, 755 permissions for directories and 644 permissions for files provide the correct level of access (see the example after this list).

    You need to consider additional factors if both NFS and HTTP URLs are to be used to access one web site. These factors are described in WebNFS Limitations With Web Browser Use.
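
    The following sketch shows one way to apply those permissions, assuming the /export/ftp file system from the earlier examples:


    # find /export/ftp -type d -exec chmod 755 {} \;
    # find /export/ftp -type f -exec chmod 644 {} \;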

How to Browse Using an NFS URL

Browsers that are capable of supporting the WebNFS service should provide access to an NFS URL that resembles the following:


nfs://server<:port>/path
server

Name of the file server

port

Port number to use (2049, default value)

path

Path to file, which can be relative to the public file handle or to the root file system


Note –

In most browsers, the URL service type (for example, nfs or http) is remembered from one transaction to the next. The exception occurs when a URL that includes a different service type is loaded. After you use an NFS URL, a reference to an HTTP URL might be loaded. If such a reference is loaded, subsequent pages are loaded by using the HTTP protocol instead of the NFS protocol.


How to Enable WebNFS Access Through a Firewall

You can enable WebNFS access for clients that are not part of the local subnet by configuring the firewall to allow a TCP connection on port 2049. Just allowing access for httpd does not allow NFS URLs to be used.

Task Overview for Autofs Administration

This section describes some of the most common tasks you might encounter in your own environment. Recommended procedures are included for each scenario to help you configure autofs to best meet your clients' needs. To perform the tasks that are discussed in this section, use the Solaris Management Console tools or see the System Administration Guide: Naming and Directory Services (NIS+).


Note –

Starting in the Solaris 10 release, you can also use the /etc/default/autofs file to configure your autofs environment. For task information, refer to Using the /etc/default/autofs File to Configure Your autofs Environment.


Task Map for Autofs Administration

The following table provides a description and a pointer to many of the tasks that are related to autofs.

Table 5–5 Task Map for Autofs Administration

Task 

Description 

For Instructions 

Start autofs 

Start the automount service without having to reboot the system 

How to Start the Automounter

Stop autofs 

Stop the automount service without disabling other network services 

How to Stop the Automounter

Configure your autofs environment by using the /etc/default/autofs file

Assign values to keywords in the /etc/default/autofs file

Using the /etc/default/autofs File to Configure Your autofs Environment

Access file systems by using autofs 

Access file systems by using the automount service 

Mounting With the Automounter

Modify the autofs maps 

Steps to modify the master map, which should be used to list other maps 

How to Modify the Master Map

 

Steps to modify an indirect map, which should be used for most maps 

How to Modify Indirect Maps

 

Steps to modify a direct map, which should be used when a direct association between a mount point on a client and a server is required 

How to Modify Direct Maps

Modify the autofs maps to access non-NFS file systems 

Steps to set up an autofs map with an entry for a CD-ROM application 

How to Access CD-ROM Applications With Autofs

 

Steps to set up an autofs map with an entry for a PC-DOS diskette 

How to Access PC-DOS Data Diskettes With Autofs

 

Steps to use autofs to access a CacheFS file system 

How to Access NFS File Systems by Using CacheFS

Using /home

Example of how to set up a common /home map

Setting Up a Common View of /home

 

Steps to set up a /home map that refers to multiple file systems

How to Set Up /home With Multiple Home Directory File Systems

Using a new autofs mount point 

Steps to set up a project-related autofs map 

How to Consolidate Project-Related Files Under /ws

 

Steps to set up an autofs map that supports different client architectures 

How to Set Up Different Architectures to Access a Shared Namespace

 

Steps to set up an autofs map that supports different operating systems 

How to Support Incompatible Client Operating System Versions

Replicate file systems with autofs 

Provide access to file systems that fail over 

How to Replicate Shared Files Across Several Servers

Using security restrictions with autofs 

Provide access to file systems while restricting remote root access to the files

How to Apply Autofs Security Restrictions

Using a public file handle with autofs 

Force use of the public file handle when mounting a file system 

How to Use a Public File Handle With Autofs

Using an NFS URL with autofs 

Add an NFS URL so that the automounter can use it 

How to Use NFS URLs With Autofs

Disable autofs browsability 

Steps to disable browsability so that autofs mount points are not automatically populated on a single client 

How to Completely Disable Autofs Browsability on a Single NFS Client

 

Steps to disable browsability so that autofs mount points are not automatically populated on all clients 

How to Disable Autofs Browsability for All Clients

 

Steps to disable browsability so that a specific autofs mount point is not automatically populated on a client 

How to Disable Autofs Browsability on a Selected File System

Using the /etc/default/autofs File to Configure Your autofs Environment

Starting in the Solaris 10 release, you can use the /etc/default/autofs file to configure your autofs environment. Specifically, this file provides an additional way to configure your autofs commands and autofs daemons. The same specifications you would make on the command line can be made in this configuration file. You can make your specifications by providing values to keywords. For more information, refer to /etc/default/autofs File.

The following procedure shows you how to use the /etc/default/autofs file.

How to Use the /etc/default/autofs File

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Add or modify an entry in the /etc/default/autofs file.

    For example, if you want to turn off browsing for all autofs mount points, you could add the following line.


    AUTOMOUNTD_NOBROWSE=ON
    

    This keyword is the equivalent of the -n argument for automountd. For a list of keywords, refer to /etc/default/autofs File.

  3. Restart the autofs daemon.

    Type the following command:


    # svcadm restart system/filesystem/autofs
    

Administrative Tasks Involving Maps

The following tables describe several of the factors you need to be aware of when administering autofs maps. Your choice of map and name service affect the mechanism that you need to use to make changes to the autofs maps.

The following table describes the types of maps and their uses.

Table 5–6 Types of autofs Maps and Their Uses

Type of Map 

Use 

Master

Associates a directory with a map 

Direct

Directs autofs to specific file systems 

Indirect

Directs autofs to reference-oriented file systems 

The following table describes how to make changes to your autofs environment that are based on your name service.

Table 5–7 Map Maintenance

Name Service 

Method 

Local files 

Text editor

NIS 

make files

NIS+ 

nistbladm

The next table tells you when to run the automount command, depending on the modification you have made to the type of map. For example, if you have made an addition or a deletion to a direct map, you need to run the automount command on the local system. By running the command, you make the change effective. However, if you have modified an existing entry, you do not need to run the automount command for the change to become effective.

Table 5–8 When to Run the automount Command

Type of Map      Restart automount?
                 Addition or Deletion    Modification

auto_master      Y                       Y
direct           Y                       N
indirect         N                       N

Modifying the Maps

The following procedures show you how to update several types of automounter maps. These procedures require that you use NIS+ as your name service.

How to Modify the Master Map

  1. Log in as a user who has permissions to change the maps.

  2. Using the nistbladm command, make your changes to the master map.

    See the System Administration Guide: Naming and Directory Services (NIS+).

  3. For each client, become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  4. For each client, run the automount command to ensure that your changes become effective.

  5. Notify your users of the changes.

    Notification is required so that the users can also run the automount command as superuser on their own computers. Note that the automount command gathers information from the master map whenever it is run.
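
    For example, each client might run the following command. The -v option reports which file systems were mounted or unmounted as a result of the map changes.


    # automount -v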

How to Modify Indirect Maps

  1. Log in as a user who has permissions to change the maps.

  2. Using the nistbladm command, make your changes to the indirect map.

    See the System Administration Guide: Naming and Directory Services (NIS+). Note that the change becomes effective the next time that the map is used, which is the next time a mount is performed.

How to Modify Direct Maps

  1. Log in as a user who has permissions to change the maps.

  2. Using the nistbladm command, add or delete your changes to the direct map.

    See the System Administration Guide: Naming and Directory Services (NIS+).

  3. Notify your users of the changes.

    Notification is required so that the users can run the automount command as superuser on their own computers, if necessary.


    Note –

    If you only modify or change the contents of an existing direct map entry, you do not need to run the automount command.


    For example, suppose you modify the auto_direct map so that the /usr/src directory is now mounted from a different server. If /usr/src is not mounted at this time, the new entry becomes effective immediately when you try to access /usr/src. If /usr/src is mounted now, you can wait until the auto-unmounting occurs, then access the file.


    Note –

    Use indirect maps whenever possible. Indirect maps are easier to construct and less demanding on the computers' file systems. Also, indirect maps do not occupy as much space in the mount table as direct maps.


Avoiding Mount-Point Conflicts

If you have a local disk partition that is mounted on /src and you plan to use the autofs service to mount other source directories, you might encounter a problem. If you specify the mount point /src, the NFS service hides the local partition whenever you try to reach it.

You need to mount the partition in some other location, for example, on /export/src. You then need an entry in /etc/vfstab such as the following:


/dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /export/src ufs 3 yes - 

You also need this entry in auto_src:


terra		terra:/export/src 

terra is the name of the computer.

Accessing Non-NFS File Systems

Autofs can also mount files other than NFS files. Autofs mounts files on removable media, such as diskettes or CD-ROM. Normally, you would mount files on removable media by using the Volume Manager. The following examples show how this mounting could be accomplished through autofs. The Volume Manager and autofs do not work together, so these entries would not be used without first deactivating the Volume Manager.

Instead of mounting a file system from a server, you put the media in the drive and reference the file system from the map. If you plan to access non-NFS file systems and you are using autofs, see the following procedures.

How to Access CD-ROM Applications With Autofs


Note –

Use this procedure if you are not using Volume Manager.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Update the autofs map.

    Add an entry for the CD-ROM file system, which should resemble the following:


    hsfs     -fstype=hsfs,ro     :/dev/sr0

    The CD-ROM device that you intend to mount must appear as a name that follows the colon.

How to Access PC-DOS Data Diskettes With Autofs


Note –

Use this procedure if you are not using Volume Manager.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Update the autofs map.

    Add an entry for the diskette file system such as the following:


     pcfs     -fstype=pcfs     :/dev/diskette

Accessing NFS File Systems Using CacheFS

The cache file system (CacheFS) is a generic nonvolatile caching mechanism. CacheFS improves the performance of certain file systems by utilizing a small, fast local disk. For example, you can improve the performance of the NFS environment by using CacheFS.

CacheFS works differently with different versions of NFS. For example, if both the client and the back file system are running NFS version 2 or version 3, the files are cached in the front file system for access by the client. However, if both the client and the server are running NFS version 4, the functionality is as follows. When the client makes the initial request to access a file from a CacheFS file system, the request bypasses the front (or cached) file system and goes directly to the back file system. With NFS version 4, files are no longer cached in a front file system. All file access is provided by the back file system. Also, since no files are being cached in the front file system, CacheFS-specific mount options, which are meant to affect the front file system, are ignored. CacheFS-specific mount options do not apply to the back file system.


Note –

The first time you configure your system for NFS version 4, a warning appears on the console to indicate that caching is no longer performed.


How to Access NFS File Systems by Using CacheFS

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Run the cfsadmin command to create a cache directory on the local disk.


    # cfsadmin -c /var/cache
    
  3. Add the cachefs entry to the appropriate automounter map.

    For example, adding this entry to the master map caches all home directories:


    /home auto_home -fstype=cachefs,cachedir=/var/cache,backfstype=nfs

    Adding this entry to the auto_home map only caches the home directory for the user who is named rich:


    rich -fstype=cachefs,cachedir=/var/cache,backfstype=nfs dragon:/export/home1/rich

    Note –

    Options that are included in maps that are searched later override options that are set in maps that are searched earlier. The last options that are found are the ones that are used. In the previous example, an additional entry in the auto_home map needs to include options only if the options in the master map require changes.


Customizing the Automounter

You can set up the automounter maps in several ways. The following tasks give details about how to customize the automounter maps to provide an easy-to-use directory structure.

Setting Up a Common View of /home

The ideal is for all network users to be able to locate their own or anyone's home directory under /home. This view should be common across all computers, whether client or server.

Every Solaris installation comes with a master map: /etc/auto_master.


# Master map for autofs
#
+auto_master
/net     -hosts     -nosuid,nobrowse
/home    auto_home  -nobrowse

A map for auto_home is also installed under /etc.


# Home directory map for autofs
#
+auto_home

Except for a reference to an external auto_home map, this map is empty. If the directories under /home are to be common to all computers, do not modify this /etc/auto_home map. All home directory entries should appear in the name service files, either NIS or NIS+.


Note –

Users should not be permitted to run setuid executables from their home directories. Without this restriction, any user could have superuser privileges on any computer.


How to Set Up /home With Multiple Home Directory File Systems

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Install home directory partitions under /export/home.

    If the system has several partitions, install the partitions under separate directories, for example, /export/home1 and /export/home2.

  3. Use the Solaris Management Console tools to create and maintain the auto_home map.

    Whenever you create a new user account, type the location of the user's home directory in the auto_home map. Map entries can be simple, for example:


    rusty        dragon:/export/home1/&
    gwenda       dragon:/export/home1/&
    charles      sundog:/export/home2/&
    rich         dragon:/export/home3/&

    Notice the use of the & (ampersand) to substitute the map key. The ampersand is an abbreviation for the second occurrence of rusty in the following example.


    rusty     	dragon:/export/home1/rusty

    With the auto_home map in place, users can refer to any home directory (including their own) with the path /home/user. user is their login name and the key in the map. This common view of all home directories is valuable when logging in to another user's computer. Autofs mounts your home directory for you. Similarly, if you run a remote windowing system client on another computer, the client program has the same view of the /home directory.

    This common view also extends to the server. Using the previous example, if rusty logs in to the server dragon, autofs there provides direct access to the local disk by loopback-mounting /export/home1/rusty onto /home/rusty.

    Users do not need to be aware of the real location of their home directories. If rusty needs more disk space and needs to have his home directory relocated to another server, a simple change is sufficient. You need only change rusty's entry in the auto_home map to reflect the new location. Other users can continue to use the /home/rusty path.

How to Consolidate Project-Related Files Under /ws

Assume that you are the administrator of a large software development project. You plan to make all project-related files available under a directory that is called /ws. This directory is to be common across all workstations at the site.

  1. Add an entry for the /ws directory to the site auto_master map, either NIS or NIS+.


    /ws     auto_ws     -nosuid 

    The auto_ws map determines the contents of the /ws directory.

  2. Add the -nosuid option as a precaution.

    This option prevents users from running setuid programs that might exist in any workspaces.

  3. Add entries to the auto_ws map.

    The auto_ws map is organized so that each entry describes a subproject. Your first attempt yields a map that resembles the following:


    compiler   alpha:/export/ws/&
    windows    alpha:/export/ws/&
    files      bravo:/export/ws/&
    drivers    alpha:/export/ws/&
    man        bravo:/export/ws/&
    tools      delta:/export/ws/&

    The ampersand (&) at the end of each entry is an abbreviation for the entry key. For instance, the first entry is equivalent to the following:


    compiler		alpha:/export/ws/compiler 

    This first attempt provides a map that appears simple, but the map is inadequate. The project organizer decides that the documentation in the man entry should be provided as a subdirectory under each subproject. Also, each subproject requires subdirectories to describe several versions of the software. You must assign each of these subdirectories to an entire disk partition on the server.

    Modify the entries in the map as follows:


    compiler \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /man        bravo:/export/ws/&/man
    windows \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    files \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /vers3.0    bravo:/export/ws/&/vers3.0 \
        /man        bravo:/export/ws/&/man
    drivers \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    tools \
        /           delta:/export/ws/&

    Although the map now appears to be much larger, the map still contains only the five entries. Each entry is larger because each entry contains multiple mounts. For instance, a reference to /ws/compiler requires three mounts for the vers1.0, vers2.0, and man directories. The backslash at the end of each line informs autofs that the entry is continued onto the next line. Effectively, the entry is one long line, though line breaks and some indenting have been used to make the entry more readable. The tools directory contains software development tools for all subprojects, so this directory is not subject to the same subdirectory structure. The tools directory continues to be a single mount.

    This arrangement provides the administrator with much flexibility. Software projects typically consume substantial amounts of disk space. Through the life of the project, you might be required to relocate and expand various disk partitions. If these changes are reflected in the auto_ws map, the users do not need to be notified, as the directory hierarchy under /ws is not changed.

    Because the servers alpha and bravo view the same autofs map, any users who log in to these computers can find the /ws namespace as expected. These users are provided with direct access to local files through loopback mounts instead of NFS mounts.

ProcedureHow to Set Up Different Architectures to Access a Shared Namespace

You need to assemble a shared namespace for local executables and applications, such as spreadsheet and word-processing packages. The clients of this namespace use several different workstation architectures that require different executable formats. Also, some workstations are running different releases of the operating system.

  1. Create the auto_local map.

    See the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP).

  2. Choose a single, site-specific name for the shared namespace.

    This name makes the files and directories that belong to this space easily identifiable. For example, if you choose /usr/local as the name, the path /usr/local/bin is obviously a part of this namespace.

  3. For ease of user community recognition, create an autofs indirect map.

    Mount this map at /usr/local. Set up the following entry in the NIS auto_master map:


    /usr/local     auto_local     -ro

    Notice that the -ro mount option implies that clients cannot write to any files or directories.

  4. Export the appropriate directory on the server.

  5. Include a bin entry in the auto_local map.

    Your directory structure resembles the following:


     bin     aa:/export/local/bin 
  6. (Optional) To serve clients of different architectures, change the entry by adding the autofs CPU variable.


    bin     aa:/export/local/bin/$CPU 
    • For SPARC clients – Place executables in /export/local/bin/sparc.

    • For x86 clients – Place executables in /export/local/bin/i386.

ProcedureHow to Support Incompatible Client Operating System Versions

  1. Combine the architecture type with a variable that determines the operating system type of the client.

    You can combine the autofs OSREL variable with the CPU variable to form a name that determines both CPU type and OS release.

  2. Create the following map entry.


    bin     aa:/export/local/bin/$CPU$OSREL

    For clients that are running version 5.6 of the operating system, export the following file systems:

    • For SPARC clients – Export /export/local/bin/sparc5.6.

    • For x86 clients – Export /export/local/bin/i3865.6.
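    To make the expansion concrete, on a SPARC client that runs release 5.6 of the operating system, $CPU$OSREL expands to sparc5.6. The entry from the previous step therefore resolves as in the following sketch, where aa is the example server used throughout this procedure:

    bin     aa:/export/local/bin/sparc5.6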

ProcedureHow to Replicate Shared Files Across Several Servers

The best way to share replicated file systems that are read-only is to use failover. See Client-Side Failover for a discussion of failover.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Modify the entry in the autofs maps.

    Create the list of all replica servers as a comma-separated list, such as the following:


    bin     aa,bb,cc,dd:/export/local/bin/$CPU
    

    Autofs chooses the nearest server. If a server has several network interfaces, list each interface. Autofs chooses the nearest interface to the client, avoiding unnecessary routing of NFS traffic.
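    For example, if the server aa has two network interfaces that are known by the hypothetical host names aa-if1 and aa-if2, you might list both names along with the other replicas:

    bin     aa-if1,aa-if2,bb,cc,dd:/export/local/bin/$CPU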

ProcedureHow to Apply Autofs Security Restrictions

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create the following entry in the name service auto_master file, either NIS or NIS+:


    /home     auto_home     -nosuid
    

    The nosuid option prevents users from creating files with the setuid or setgid bit set.

    This entry overrides the entry for /home in a generic local /etc/auto_master file. See the previous example. The override happens because the +auto_master reference to the external name service map occurs before the /home entry in the file. If the entries in the auto_home map include mount options, the nosuid option is overwritten. Therefore, either no options should be used in the auto_home map or the nosuid option must be included with each entry.
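    For example, the following auto_home entries preserve the restriction. The first entry carries no options, so the nosuid option from the auto_master entry applies. The second entry, for a hypothetical user gwenda who needs additional mount options, repeats nosuid explicitly (the path shown is also an example):

    rusty      dragon:/export/home1/&
    gwenda     -nosuid,rw     dragon:/export/home2/&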


    Note –

    Do not mount the home directory disk partitions on or under /home on the server.


ProcedureHow to Use a Public File Handle With Autofs

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create an entry in the autofs map such as the following:


    /usr/local     -ro,public    bee:/export/share/local

    The public option forces the public file handle to be used. If the NFS server does not support a public file handle, the mount fails.

ProcedureHow to Use NFS URLs With Autofs

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Create an autofs entry such as the following:


    /usr/local     -ro    nfs://bee/export/share/local

    The service tries to use the public file handle on the NFS server. However, if the server does not support a public file handle, the MOUNT protocol is used.

Disabling Autofs Browsability

Starting with the Solaris 2.6 release, the default version of /etc/auto_master that is installed has the -nobrowse option added to the entries for /home and /net. In addition, the upgrade procedure adds the -nobrowse option to the /home and /net entries in /etc/auto_master if these entries have not been modified. However, you might have to make these changes manually or turn off browsability for site-specific autofs mount points after the installation.

You can turn off the browsability feature in several ways. Disable the feature by using a command-line option to the automountd daemon, which completely disables autofs browsability for the client. Or disable browsability for each map entry on all clients by using the autofs maps in either an NIS or NIS+ namespace. You can also disable the feature for each map entry on each client, using local autofs maps if no network-wide namespace is being used.

ProcedureHow to Completely Disable Autofs Browsability on a Single NFS Client

  1. Become superuser or assume an equivalent role on the NFS client.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Edit the /etc/default/autofs file to include the following keyword and value.


    AUTOMOUNTD_NOBROWSE=TRUE
  3. Restart the autofs service.


    # svcadm restart system/filesystem/autofs
    

ProcedureHow to Disable Autofs Browsability for All Clients

To disable browsability for all clients, you must employ a name service such as NIS or NIS+. Otherwise, you need to manually edit the automounter maps on each client. In this example, the browsability of the /home directory is disabled. You must follow this procedure for each indirect autofs node that needs to be disabled.

  1. Add the -nobrowse option to the /home entry in the name service auto_master file.


    /home     auto_home     -nobrowse
    
  2. Run the automount command on all clients.

    The new behavior becomes effective after you run the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
    

ProcedureHow to Disable Autofs Browsability on a Selected File System

In this example, browsability of the /net directory is disabled. You can use the same procedure for /home or any other autofs mount points.

  1. Check the automount entry in /etc/nsswitch.conf.

    For local file entries to have precedence, the entry in the name service switch file should list files before the name service. For example:


    automount:  files nis

    This entry shows the default configuration in a standard Solaris installation.

  2. Check the position of the +auto_master entry in /etc/auto_master.

    For additions to the local files to have precedence over the entries in the namespace, the +auto_master entry must be moved to follow /net:


    # Master map for automounter
    #
    /net    -hosts     -nosuid
    /home   auto_home
    /xfn    -xfn
    +auto_master
    

    A standard configuration places the +auto_master entry at the top of the file. This placement prevents any local changes from being used.

  3. Add the nobrowse option to the /net entry in the /etc/auto_master file.


    /net     -hosts     -nosuid,nobrowse
    
  4. On all clients, run the automount command.

    The new behavior becomes effective after running the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
    

Strategies for NFS Troubleshooting

When tracking an NFS problem, remember the main points of possible failure: the server, the client, and the network. The strategy that is outlined in this section tries to isolate each individual component to find the one that is not working. In all situations, the mountd and nfsd daemons must be running on the server for remote mounts to succeed.

The -intr option is set by default for all mounts. If a program hangs with a server not responding message, you can kill the program with the keyboard interrupt Control-c.

When the network or server has problems, programs that access hard-mounted remote files fail differently than programs that access soft-mounted remote files. Hard-mounted remote file systems cause the client's kernel to retry the requests until the server responds again. Soft-mounted remote file systems cause the client's system calls to return an error after trying for a while. Because these errors can result in unexpected application errors and data corruption, avoid soft mounting.
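For illustration only, the following commands show how the two kinds of mounts might be requested explicitly. The server name bee and the paths are examples, and the hard behavior together with intr is already the default.

# Hard mount (the default behavior): the client's kernel retries until the server responds.
mount -F nfs -o hard,intr bee:/export/share/local /mnt

# Soft mount: system calls return an error after the retries are exhausted.
# Avoid this form for writable data or for file systems that hold executables.
mount -F nfs -o soft bee:/export/share/local /mnt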

When a file system is hard mounted, a program that tries to access the file system hangs if the server fails to respond. In this situation, the NFS system displays the following message on the console:


NFS server hostname not responding still trying

When the server finally responds, the following message appears on the console:


NFS server hostname ok

A program that accesses a soft-mounted file system whose server is not responding generates the following message:


NFS operation failed for server hostname: error # (error-message)

Note –

Because of possible errors, do not soft-mount file systems with read-write data or file systems from which executables are run. Writable data could be corrupted if the application ignores the errors. Mounted executables might not load properly and can fail.


NFS Troubleshooting Procedures

To determine where the NFS service has failed, you need to follow several procedures to isolate the failure. Check for the following items:

  • Whether the client can reach the server

  • Whether the client can contact the NFS services on the server

  • Whether the NFS services are running on the server

In the process of checking these items, you might notice that other portions of the network are not functioning. For example, the name service or the physical network hardware might not be functioning. The System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) contains debugging procedures for several name services. Also, during the process you might see that the problem is not at the client end, for example, if you get at least one trouble call from every subnet in your work area. In this situation, you should assume that the problem is the server or the network hardware near the server, and you should start the debugging process at the server, not at the client.

ProcedureHow to Check Connectivity on an NFS Client

  1. Check that the NFS server is reachable from the client. On the client, type the following command.


    % /usr/sbin/ping bee
    bee is alive

    If the command reports that the server is alive, remotely check the NFS server. See How to Check the NFS Server Remotely.

  2. If the server is not reachable from the client, ensure that the local name service is running.

    For NIS+ clients, type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  3. If the name service is running, ensure that the client has received the correct host information by typing the following:


    % /usr/bin/getent hosts bee
    129.144.83.117	bee.eng.acme.com
  4. If the host information is correct, but the server is not reachable from the client, run the ping command from another client.

    If the command run from a second client fails, see How to Verify the NFS Service on the Server.

  5. If the server is reachable from the second client, use ping to check connectivity of the first client to other systems on the local net.

    If this command fails, check the networking software configuration on the client, for example, /etc/netmasks and /etc/nsswitch.conf.

  6. (Optional) Check the output of the rpcinfo command.

    If the rpcinfo command does not display program 100003 version 4 ready and waiting, then NFS version 4 is not enabled on the server. See Table 5–3 for information about enabling NFS version 4.
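    For example, one way to make this check from the client is to query the server's TCP transport directly; bee is the example server name, and the output shown is what you should expect when NFS version 4 is enabled:

    % rpcinfo -t bee nfs 4
    program 100003 version 4 ready and waiting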

  7. If the software is correct, check the networking hardware.

    Try to move the client onto a second net drop.

ProcedureHow to Check the NFS Server Remotely

Note that support for both the UDP and the MOUNT protocols is not necessary if you are using an NFS version 4 server.

  1. Check that the NFS services have started on the NFS server by typing the following command:


    % rpcinfo -s bee|egrep 'nfs|mountd'
     100003  3,2    tcp,udp,tcp6,udp6                nfs     superuser
     100005  3,2,1  ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6  mountd  superuser

    If the daemons have not been started, see How to Restart NFS Services.

  2. Check that the server's nfsd processes are responding.

    On the client, type the following command to test the UDP NFS connections from the server.


    % /usr/bin/rpcinfo -u bee nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting

    Note –

    NFS version 4 does not support UDP.


    If the server is running, it prints a list of program and version numbers. Using the -t option tests the TCP connection. If this command fails, proceed to How to Verify the NFS Service on the Server.

  3. Check that the server's mountd is responding, by typing the following command.


    % /usr/bin/rpcinfo -u bee mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Using the -t option tests the TCP connection. If either attempt fails, proceed to How to Verify the NFS Service on the Server.

  4. Check the local autofs service if it is being used:


    % cd /net/wasp
    

    Choose a /net or /home mount point that you know should work properly. If this command fails, then as root on the client, type the following to restart the autofs service:


    # svcadm restart system/filesystem/autofs
    
  5. Verify that the file system is shared as expected on the server.


    % /usr/sbin/showmount -e bee
    /usr/src                 eng
    /export/share/man        (everyone)

    Check the entry on the server and the local mount entry for errors. Also, check the namespace. In this instance, if the first client is not in the eng netgroup, that client cannot mount the /usr/src file system.

    Check all entries that include mounting information in all the local files. The list includes /etc/vfstab and all the /etc/auto_* files.
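    For reference, a typical NFS entry in /etc/vfstab resembles the following sketch; the server name bee and the paths are examples only:

    #device to mount   device to fsck   mount point   FS type   fsck pass   mount at boot   options
    bee:/usr/src       -                /usr/src      nfs       -           yes             ro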

ProcedureHow to Verify the NFS Service on the Server

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Check that the server can reach the clients.


    # ping lilac
    lilac is alive
  3. If the client is not reachable from the server, ensure that the local name service is running.

    For NIS+ clients, type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  4. If the name service is running, check the networking software configuration on the server, for example, /etc/netmasks and /etc/nsswitch.conf.

  5. Type the following command to check whether the rpcbind daemon is running.


    # /usr/bin/rpcinfo -u localhost rpcbind
    program 100000 version 1 ready and waiting
    program 100000 version 2 ready and waiting
    program 100000 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. If rpcbind seems to be hung, reboot the server.

  6. Type the following command to check whether the nfsd daemon is running.


    # rpcinfo -u localhost nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    # ps -ef | grep nfsd
    root    232      1  0  Apr 07     ?     0:01 /usr/lib/nfs/nfsd -a 16
    root   3127   2462  1  09:32:57  pts/3  0:00 grep nfsd

    Note –

    NFS version 4 does not support UDP.


    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.

  7. Type the following command to check whether the mountd daemon is running.


    # /usr/bin/rpcinfo -u localhost mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting
    # ps -ef | grep mountd
    root    145      1 0 Apr 07  ?     21:57 /usr/lib/autofs/automountd
    root    234      1 0 Apr 07  ?     0:04  /usr/lib/nfs/mountd
    root   3084 2462 1 09:30:20 pts/3  0:00  grep mountd

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.

ProcedureHow to Restart NFS Services

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services.

  2. Restart the NFS service on the server.

    Type the following command.


    # svcadm restart network/nfs/server
    

Identifying Which Host Is Providing NFS File Service

Run the nfsstat command with the -m option to gather current NFS information. The name of the current server is printed after “currserver=”.


% nfsstat -m
/usr/local from bee,wasp:/export/share/local
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,
        acl,rsize=32768,wsize=32768,retrans=5
 Failover: noresponse=0, failover=0, remap=0, currserver=bee

ProcedureHow to Verify Options Used With the mount Command

In the Solaris 2.6 release and in any versions of the mount command that were patched after the 2.6 release, no warning is issued for invalid options. The following procedure helps determine whether the options that were supplied either on the command line or through /etc/vfstab were valid.

For this example, assume that the following command has been run:


# mount -F nfs -o ro,vers=2 bee:/export/share/local /mnt
  1. Verify the options by running the following command.


    % nfsstat -m
    /mnt from bee:/export/share/local
    Flags:  vers=2,proto=tcp,sec=sys,hard,intr,dynamic,acl,rsize=8192,wsize=8192,
            retrans=5

    The file system from bee has been mounted with the protocol version set to 2. Unfortunately, the nfsstat command does not display information about all of the options. However, using the nfsstat command is the most accurate way to verify the options.

  2. Check the entry in /etc/mnttab.

    The mount command does not allow invalid options to be added to the mount table. Therefore, verify that the options that are listed in the file match those options that are listed on the command line. In this way, you can check those options that are not reported by the nfsstat command.


    # grep bee /etc/mnttab
    bee:/export/share/local /mnt nfs	ro,vers=2,dev=2b0005e 859934818

Troubleshooting Autofs

Occasionally, you might encounter problems with autofs. This section should improve the problem-solving process. The section is divided into two subsections.

This section presents a list of the error messages that autofs generates. The list is divided into two parts:

  • Error messages that are generated by the verbose (-v) option of automount

  • Error messages that might appear at any time

Each error message is followed by a description and probable cause of the message.

When troubleshooting, start the autofs programs with the verbose (-v) option. Otherwise, you might experience problems without knowing the cause.

The following paragraphs are labeled with the error message you are likely to see if autofs fails, and a description of the possible problem.

Error Messages Generated by automount -v


bad key key in direct map mapname

Description:

While scanning a direct map, autofs has found an entry key without a prefixed /.

Solution:

Keys in direct maps must be full path names.


bad key key in indirect map mapname

Description:

While scanning an indirect map, autofs has found an entry key that contains a /.

Solution:

Indirect map keys must be simple names, not path names.


can't mount server:pathname: reason

Description:

The mount daemon on the server refuses to provide a file handle for server:pathname.

Solution:

Check the export table on the server.


couldn't create mount point mountpoint: reason

Description:

Autofs was unable to create a mount point that was required for a mount. This problem most frequently occurs when you attempt to hierarchically mount all of a server's exported file systems.

Solution:

A required mount point can exist only in a file system that cannot be mounted, which means the file system cannot be exported. The mount point cannot be created because the parent file system is exported read-only.


leading space in map entry entry text in mapname

Description:

Autofs has discovered an entry in an automount map that contains leading spaces. This problem is usually an indication of an improperly continued map entry. For example:


fake
    /blat       frobz:/usr/frotz
Solution:

In this example, the warning is generated when autofs encounters the second line because the first line should be terminated with a backslash (\).
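The corrected entry terminates the first line with a backslash (\) so that both lines form a single entry:

fake \
    /blat       frobz:/usr/frotz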


mapname: Not found

Description:

The required map cannot be located. This message is produced only when the -v option is used.

Solution:

Check the spelling and path name of the map name.


remount server:pathname on mountpoint: server not responding

Description:

Autofs has failed to remount a file system that it previously unmounted.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


WARNING: mountpoint already mounted on

Description:

Autofs is attempting to mount over an existing mount point. This message means that an internal error occurred in autofs (an anomaly).

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.

Miscellaneous Error Messages


dir mountpoint must start with '/'

Solution:

The automounter mount point must be given as a full path name. Check the spelling and path name of the mount point.


hierarchical mountpoint: pathname1 and pathname2

Solution:

Autofs does not allow its mount points to have a hierarchical relationship. An autofs mount point must not be contained within another automounted file system.


host server not responding

Description:

Autofs attempted to contact server, but received no response.

Solution:

Check the NFS server status.


hostname: exports: rpc-err

Description:

An error occurred while getting the export list from hostname. This message indicates a server or network problem.

Solution:

Check the NFS server status.


map mapname, key key: bad

Description:

The map entry is malformed, and autofs cannot interpret the entry.

Solution:

Recheck the entry. Perhaps the entry has characters that need to be escaped.


mapname: nis-err

Description:

An error occurred when looking up an entry in a NIS map. This message can indicate NIS problems.

Solution:

Check the NIS server status.


mount of server:pathname on mountpoint: reason

Description:

Autofs failed to do a mount. This occurrence can indicate a server or network problem. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


mountpoint: Not a directory

Description:

Autofs cannot mount itself on mountpoint because it is not a directory.

Solution:

Check the spelling and path name of the mount point.


nfscast: cannot send packet: reason

Description:

Autofs cannot send a query packet to a server in a list of replicated file system locations. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


nfscast: cannot receive reply: reason

Description:

Autofs cannot receive replies from any of the servers in a list of replicated file system locations. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


nfscast: select: reason

Description:

All these error messages indicate problems in attempting to check servers for a replicated file system. This message can indicate a network problem. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


pathconf: no info for server:pathname

Description:

Autofs failed to get pathconf information for the path name.

Solution:

See the fpathconf(2) man page.


pathconf: server: server not responding

Description:

Autofs is unable to contact the mount daemon on server that provides the information to pathconf().

Solution:

Avoid using the POSIX mount option with this server.

Other Errors With Autofs

If the /etc/auto* files have the execute bit set, the automounter tries to execute the maps, which creates messages such as the following:

/etc/auto_home: +auto_home: not found

In this situation, the auto_home file has incorrect permissions. Each entry in the file generates an error message that is similar to this message. The permissions to the file should be reset by typing the following command:


# chmod 644 /etc/auto_home

NFS Error Messages

This section shows an error message that is followed by a description of the conditions that can create the error and at minimum one remedy.


Bad argument specified with index option - must be a file

Solution:

You must include a file name with the index option. You cannot use directory names.


Cannot establish NFS service over /dev/tcp: transport setup problem

Description:

This message is often created when the services information in the namespace has not been updated. The message can also be reported for UDP.

Solution:

To fix this problem, you must update the services data in the namespace.

For NIS+, the entries should be as follows:


nfsd nfsd tcp 2049 NFS server daemon
nfsd nfsd udp 2049 NFS server daemon

For NIS and /etc/services, the entries should be as follows:


nfsd    2049/tcp    nfs    # NFS server daemon
nfsd    2049/udp    nfs    # NFS server daemon

Cannot use index option without public option

Solution:

Include the public option with the index option in the share command. You must define the public file handle in order for the index option to work.
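For example, a share command that combines the two options might resemble the following sketch; the shared path and the file name index.html are examples:

share -F nfs -o ro,public,index=index.html /export/web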


Note –

The Solaris 2.5.1 release required that the public file handle be set by using the share command. A change in the Solaris 2.6 release sets the public file handle to be root (/) by default. This error message is no longer relevant.



Could not start daemon: error

Description:

This message is displayed if the daemon terminates abnormally or if a system call error occurs. The error string defines the problem.

Solution:

Contact Sun for assistance. This error message is rare and has no straightforward solution.


Could not use public filehandle in request to server

Description:

This message is displayed if the public option is specified but the NFS server does not support the public file handle. In this situation, the mount fails.

Solution:

To remedy this situation, either try the mount request without using the public file handle or reconfigure the NFS server to support the public file handle.


daemon running already with pid pid

Description:

The daemon is already running.

Solution:

If you want to run a new copy, kill the current version and start a new version.


error locking lock file

Description:

This message is displayed when the lock file that is associated with a daemon cannot be locked properly.

Solution:

Contact Sun for assistance. This error message is rare and has no straightforward solution.


error checking lock file: error

Description:

This message is displayed when the lock file that is associated with a daemon cannot be opened properly.

Solution:

Contact Sun for assistance. This error message is rare and has no straightforward solution.


NOTICE: NFS3: failing over from host1 to host2

Description:

This message is displayed on the console when a failover occurs. The message is advisory only.

Solution:

No action required.


filename: File too large

Description:

An NFS version 2 client is trying to access a file that is over 2 Gbytes.

Solution:

Avoid using NFS version 2. Mount the file system with version 3 or version 4. Also, see the description of the nolargefiles option in mount Options for NFS File Systems.


mount: ... server not responding:RPC_PMAP_FAILURE - RPC_TIMED_OUT

Description:

The server that is sharing the file system you are trying to mount is down or unreachable, at the wrong run level, or its rpcbind is dead or hung.

Solution:

Wait for the server to reboot. If the server is hung, reboot the server.


mount: ... server not responding: RPC_PROG_NOT_REGISTERED

Description:

The mount request registered with rpcbind, but the NFS mount daemon mountd is not registered.

Solution:

Wait for the server to reboot. If the server is hung, reboot the server.


mount: ... No such file or directory

Description:

Either the remote directory or the local directory does not exist.

Solution:

Check the spelling of the directory names. Run ls on both directories.


mount: ...: Permission denied

Description:

Your computer name might not be in the list of clients or netgroup that is allowed access to the file system you tried to mount.

Solution:

Use showmount -e to verify the access list.


NFS file temporarily unavailable on the server, retrying ...

Description:

An NFS version 4 server can delegate the management of a file to a client. This message indicates that the server is recalling a delegation for another client that conflicts with a request from your client.

Solution:

The recall must occur before the server can process your client's request. For more information about delegation, refer to Delegation in NFS Version 4.


NFS fsstat failed for server hostname: RPC: Authentication error

Description:

This error can be caused by many situations. One of the most difficult situations to debug is when this problem occurs because a user is in too many groups. Currently, a user can be in no more than 16 groups if the user is accessing files through NFS mounts.

Solution:

An alternative does exist for users who need to be in more than 16 groups. You can use access control lists to provide the needed access privileges if you run at minimum the Solaris 2.5 release on the NFS server and the NFS clients.
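For example, rather than relying on membership in a seventeenth group, you might grant a user access directly with an ACL entry on the server. The user name and path in this sketch are hypothetical:

# setfacl -m user:rusty:rwx /export/share/local/project
# getfacl /export/share/local/project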


nfs mount: ignoring invalid option “-option”

Description:

The -option flag is not valid.

Solution:

Refer to the mount_nfs(1M) man page to verify the required syntax.


Note –

This error message is not displayed when running any version of the mount command that is included in a Solaris release from 2.6 to the current release or in earlier versions that have been patched.



nfs mount: NFS can't support “nolargefiles”

Description:

An NFS client has attempted to mount a file system from an NFS server by using the -nolargefiles option.

Solution:

This option is not supported for NFS file system types.


nfs mount: NFS V2 can't support “largefiles”

Description:

The NFS version 2 protocol cannot handle large files.

Solution:

You must use version 3 or version 4 if access to large files is required.


NFS server hostname not responding still trying

Description:

If programs hang while doing file-related work, your NFS server might have failed. This message indicates that NFS server hostname is down or that a problem has occurred with the server or the network.

Solution:

If failover is being used, hostname is a list of servers. Start troubleshooting with How to Check Connectivity on an NFS Client.


NFS server recovering

Description:

During part of the NFS version 4 server reboot, some operations were not permitted. This message indicates that the client is waiting for the server to permit this operation to proceed.

Solution:

No action required. Wait for the server to permit the operation.


Permission denied

Description:

This message is displayed by the ls -l, getfacl, and setfacl commands for the following reasons:

  • If the user or group that exists in an access control list (ACL) entry on an NFS version 4 server cannot be mapped to a valid user or group on an NFS version 4 client, the user is not allowed to read the ACL on the client.

  • If the user or group that exists in an ACL entry that is being set on an NFS version 4 client cannot be mapped to a valid user or group on an NFS version 4 server, the user is not allowed to write or modify an ACL on the client.

  • If an NFS version 4 client and server have mismatched NFSMAPID_DOMAIN values, ID mapping fails.

For more information, see ACLs and nfsmapid in NFS Version 4.

Solution:

Do the following:

  • Make sure that all user and group IDs in the ACL entries exist on both the client and server.

  • Make sure that the value for NFSMAPID_DOMAIN is set correctly in the /etc/default/nfs file. For more information, see Keywords for the /etc/default/nfs File.

To determine if any user or group cannot be mapped on the server or client, use the script that is provided in Checking for Unmapped User or Group IDs.
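For example, the client and the server might both carry the same line in /etc/default/nfs; the domain name used here is only an example:

NFSMAPID_DOMAIN=acme.com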


port number in nfs URL not the same as port number in port option

Description:

The port number that is included in the NFS URL must match the port number that is included with the -port option to mount. If the port numbers do not match, the mount fails.

Solution:

Either change the command to make the port numbers identical or do not specify the port number that is incorrect. Usually, you do not need to specify the port number with both the NFS URL and the -port option.


replicas must have the same version

Description:

For NFS failover to function properly, the NFS servers that are replicas must support the same version of the NFS protocol.

Solution:

Running multiple versions is not allowed.


replicated mounts must be read-only

Description:

NFS failover does not work on file systems that are mounted read-write. Mounting the file system read-write increases the likelihood that a file could change.

Solution:

NFS failover depends on the file systems being identical.


replicated mounts must not be soft

Description:

Replicated mounts require that you wait for a timeout before failover occurs.

Solution:

The soft option requires that the mount fail immediately when a timeout starts, so you cannot include the -soft option with a replicated mount.


share_nfs: Cannot share more than one filesystem with 'public' option

Solution:

Check that the /etc/dfs/dfstab file has only one file system selected to be shared with the -public option. Only one public file handle can be established per server, so only one file system per server can be shared with this option.
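For example, a dfstab file that satisfies this restriction might resemble the following sketch, where only the first entry uses the public option; the paths are examples:

share -F nfs -o ro,public /export/web
share -F nfs -o ro /export/share/man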


WARNING: No network locking on hostname:path: contact admin to install server change

Description:

An NFS client has unsuccessfully attempted to establish a connection with the network lock manager on an NFS server. Rather than fail the mount, this warning is generated to warn you that locking does not work.

Solution:

Upgrade the server with a new version of the OS that provides complete lock manager support.