System Administration Guide: Network Services

Part II Accessing Network File Systems Topics

This section provides overview, task, and reference information for the NFS service.

Chapter 4 Managing Network File Systems (Overview)

This chapter provides an overview of the NFS service, which can be used to access file systems over the network. The chapter includes a discussion of the concepts necessary to understand the NFS service and a description of the latest features in NFS and autofs.


Note –

If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Virtualization Using the Solaris Operating System for more information.


What's New With the NFS Service

This section provides information about new features in releases of the Solaris OS.

Changes in Solaris Express, Developer Edition 1/08

The Solaris Express, Developer Edition 1/08 release provides support for mirrormounts that enable an NFSv4 client to traverse shared file system mount points in the server namespace. The main advantage that mirrormounts offer over the traditional automounter is that mounting a file system using mirrormounts does not require the overhead associated with administering automount maps. Mirrormounts provide these features:

For more information about mirrormounts, refer to the following:

Changes in the Solaris Express, Developer Edition 2/07 Release

The Solaris Express, Developer Edition 2/07 release provides support for two utilities that enable you to manage file systems and file-sharing protocols:

For information about all the new features in the Solaris Express, Developer Edition 2/07 release, see Solaris Express Developer Edition What’s New.

Changes in the Solaris 10 11/06 Release

The Solaris 10 11/06 release provides support for a file system monitoring tool. See the following:

Additionally, this Guide provides a more detailed description of the nfsmapid daemon. For information about nfsmapid, see the following:

For a complete list of new features in the Solaris 10 11/06 release, see Solaris Express Developer Edition What’s New.

Changes in the Solaris Express 5/06 Release

Starting in the Solaris Express 5/06 release, the NFS version 4 domain can be defined during the installation of the Solaris OS. For more information, see the following:

For a complete list of the new features in the Solaris Express release, see Solaris Express Developer Edition What’s New.

Changes in the Solaris 10 Release

Starting in the Solaris 10 release, NFS version 4 is the default. For information about features in NFS version 4 and other changes, refer to the following:

Also, see the following:

Additionally, the NFS service is managed by the Service Management Facility. Administrative actions on this service, such as enabling, disabling, or restarting, can be performed by using the svcadm command. The service's status can be queried by using the svcs command. For more information about the Service Management Facility, refer to the smf(5) man page and Chapter 15, Managing Services (Overview), in System Administration Guide: Basic Administration.
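
For example, you can query the status of the NFS server service and restart it with the following commands, which use the same service name (FMRI) that appears throughout this guide:

# svcs network/nfs/server
# svcadm restart network/nfs/server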

NFS Terminology

This section presents some of the basic terminology that must be understood to work with the NFS service. Expanded coverage of the NFS service is included in Chapter 6, Accessing Network File Systems (Reference).

NFS Servers and Clients

The terms client and server are used to describe the roles that a computer assumes when sharing file systems. Computers that share their file systems over a network are acting as servers. The computers that are accessing the file systems are said to be clients. The NFS service enables any computer to access any other computer's file systems. At the same time, the NFS service provides access to its own file systems. A computer can assume the role of client, server, or both client and server at any particular time on a network.

Clients access files on the server by mounting the server's shared file systems. When a client mounts a remote file system, the client does not make a copy of the file system. Rather, the mounting process uses a series of remote procedure calls that enable the client to access the file system transparently on the server's disk. The mount resembles a local mount. Users type commands as if the file systems were local. See Mounting File Systems for information about tasks that mount file systems.

After a file system has been shared on a server through an NFS operation, the file system can be accessed from a client. You can mount an NFS file system automatically with autofs. See Automatic File-System Sharing and Task Overview for Autofs Administration for tasks that involve the share command and autofs.
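
As a minimal sketch of this flow, with a placeholder server name and path, the server first shares a file system and a client then mounts it:

On the server:

# share -F nfs -o ro /export/docs

On the client:

# mount -F nfs server:/export/docs /mnt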

NFS File Systems

The objects that can be shared with the NFS service include any whole or partial directory tree or a file hierarchy, including a single file. A computer cannot share a file hierarchy that overlaps a file hierarchy that is already shared. Peripheral devices such as modems and printers cannot be shared.

In most UNIX system environments, a file hierarchy that can be shared corresponds to a file system or to a portion of a file system. However, NFS support works across operating systems, and the concept of a file system might be meaningless in other, non-UNIX environments. Therefore, the term file system refers to a file or file hierarchy that can be shared and mounted with NFS.

About the NFS Service

The NFS service enables computers of different architectures that run different operating systems to share file systems across a network. NFS support has been implemented on many platforms, ranging from the MS-DOS operating system to the VMS operating system.

The NFS environment can be implemented on different operating systems because NFS defines an abstract model of a file system, rather than an architectural specification. Each operating system applies the NFS model to its file-system semantics. This model means that file system operations such as reading and writing function as though the operations are accessing a local file.

The NFS service has the following benefits:

The NFS service makes the physical location of the file system irrelevant to the user. You can use the NFS implementation to enable users to see all the relevant files regardless of location. Instead of placing copies of commonly used files on every system, the NFS service enables you to place one copy on one computer's disk. All other systems access the files across the network. Under NFS operation, remote file systems are almost indistinguishable from local file systems.

About Autofs

File systems that are shared through the NFS service can be mounted by using automatic mounting. Autofs, a client-side service, is a file-system structure that provides automatic mounting. The autofs file system is initialized by automount, which is run automatically when a system is booted. The automount daemon, automountd, runs continuously, mounting and unmounting remote directories as necessary.

Whenever a client computer that is running automountd tries to access a remote file or remote directory, the daemon mounts the remote file system. This remote file system remains mounted for as long as needed. If the remote file system is not accessed for a certain period of time, the file system is automatically unmounted.

Mounting need not be done at boot time, and the user no longer has to know the superuser password to mount a directory. Users do not need to use the mount and umount commands. The autofs service mounts and unmounts file systems as required without any intervention by the user.

Mounting some file hierarchies with automountd does not exclude the possibility of mounting other hierarchies with mount. A diskless computer must mount / (root), /usr, and /usr/kvm through the mount command and the /etc/vfstab file.

Task Overview for Autofs Administration and How Autofs Works give more specific information about the autofs service.

Features of the NFS Service

This section describes the important features that are included in the NFS service.

NFS Version 2 Protocol

Version 2 was the first version of the NFS protocol in wide use. Version 2 continues to be available on a large variety of platforms. All Solaris releases support version 2 of the NFS protocol, but Solaris releases prior to Solaris 2.5 support version 2 only.

NFS Version 3 Protocol

An implementation of the NFS version 3 protocol was a new feature of the Solaris 2.5 release. Several changes have been made to improve interoperability and performance. For optimal use, the version 3 protocol must be running on both the NFS servers and clients.

Unlike the NFS version 2 protocol, the NFS version 3 protocol can handle files that are larger than 2 Gbytes. The previous limitation has been removed. See NFS Large File Support.

The NFS version 3 protocol enables safe asynchronous writes on the server, which improve performance by allowing the server to cache client write requests in memory. The client does not need to wait for the server to commit the changes to disk, so the response time is faster. Also, the server can batch the requests, which improves the response time on the server.

Many Solaris NFS version 3 operations return the file attributes, which are stored in the local cache. Because the cache is updated more often, the need to do a separate operation to update this data arises less often. Therefore, the number of RPC calls to the server is reduced, improving performance.

The process for verifying file access permissions has been improved. Version 2 generated a “write error” message or a “read error” message if users tried to copy a remote file without the appropriate permissions. In version 3, the permissions are checked before the file is opened, so the error is reported as an “open error.”

The NFS version 3 protocol removed the 8-Kbyte transfer size limit. Clients and servers can negotiate whatever transfer size both sides support, rather than conforming to the 8-Kbyte limit that version 2 imposed. Note that in the Solaris 2.5 implementation, the protocol defaulted to a 32-Kbyte transfer size. Starting in the Solaris 10 release, restrictions on wire transfer sizes are relaxed. The transfer size is based on the capabilities of the underlying transport.
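
If you need to cap the transfer size explicitly, you can use the rsize and wsize mount options. The following is a sketch only; the sizes are illustrative and the server and path are the placeholders used elsewhere in this guide:

# mount -F nfs -o rsize=32768,wsize=32768 bee:/export/share/local /mnt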

NFS Version 4 Protocol

NFS version 4 has features that are not available in the previous versions:

The NFS version 4 protocol represents the user ID and the group ID as strings. nfsmapid is used by the client and the server to do the following:

For more information, refer to nfsmapid Daemon.
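
For example, the version 4 protocol transmits an owner as a string of the form user@domain rather than as a numeric ID. One way to give the client and the server a common domain for this mapping is the NFSMAPID_DOMAIN keyword in the /etc/default/nfs file, as shown later in Setting Up NFS Services; the domain shown here is a placeholder:

NFSMAPID_DOMAIN=example.com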

Note that in NFS version 4, the ID mapper, nfsmapid, is used to map user or group IDs in ACL entries on a server to user or group IDs in ACL entries on a client. The reverse is also true. For more information, see ACLs and nfsmapid in NFS Version 4.

With NFS version 4, when you unshare a file system, all the state for any open files or file locks in that file system is destroyed. In NFS version 3 the server maintained any locks that the clients had obtained before the file system was unshared. For more information, refer to Unsharing and Resharing a File System in NFS Version 4.

NFS version 4 servers use a pseudo file system to provide clients with access to exported objects on the server. Prior to NFS version 4 a pseudo file system did not exist. For more information, refer to File-System Namespace in NFS Version 4.

In NFS version 2 and version 3 the server returned persistent file handles. NFS version 4 supports volatile file handles. For more information, refer to Volatile File Handles in NFS Version 4.

Delegation, a technique by which the server delegates the management of a file to a client, is supported on both the client and the server. For example, the server could grant either a read delegation or a write delegation to a client. For more information, refer to Delegation in NFS Version 4.

Starting in the Solaris 10 release, NFS version 4 does not support the LIPKEY/SPKM security flavor.

Also, NFS version 4 does not use the following daemons:

For a complete list of the features in NFS version 4, refer to Features in NFS Version 4.

For procedural information that is related to using NFS version 4, refer to Setting Up NFS Services.

Controlling NFS Versions

The /etc/default/nfs file has keywords to control the NFS protocols that are used by both the client and the server. For example, you can use keywords to manage version negotiation. For more information, refer to Keywords for the /etc/default/nfs File or the nfs(4) man page.

NFS ACL Support

Access control list (ACL) support was added in the Solaris 2.5 release. ACLs provide a finer-grained mechanism to set file access permissions than is available through standard UNIX file permissions. NFS ACL support provides a method of changing and viewing ACL entries from a Solaris NFS client to a Solaris NFS server. See Using Access Control Lists to Protect Files in System Administration Guide: Security Services for more information about ACLs.

For information about support for ACLs in NFS version 4, see ACLs and nfsmapid in NFS Version 4.

NFS Over TCP

The default transport protocol for the NFS protocol was changed to the Transmission Control Protocol (TCP) in the Solaris 2.5 release. TCP helps performance on slow networks and wide area networks. TCP also provides congestion control and error recovery. NFS over TCP works with version 2, version 3, and version 4. Prior to the Solaris 2.5 release, the default NFS protocol was User Datagram Protocol (UDP).


Note –

Starting in the Solaris 10 release, if RDMA for InfiniBand is available, RDMA is the default transport protocol for NFS. For more information, see NFS Over RDMA. Note, however, that if you use the proto=tcp mount option, NFS mounts are forced to use TCP only.


NFS Over UDP

Starting in the Solaris 10 release, the NFS client no longer uses an excessive number of UDP ports. Previously, NFS transfers over UDP used a separate UDP port for each outstanding request. Now, by default, the NFS client uses only one UDP reserved port. However, this support is configurable. If the use of more simultaneous ports would increase system performance through increased scalability, then the system can be configured to use more ports. This capability also mirrors the NFS over TCP support, which has had this kind of configurability since its inception. For more information, refer to the Solaris Tunable Parameters Reference Manual.


Note –

NFS version 4 does not use UDP. If you mount a file system with the proto=udp option, then NFS version 3 is used instead of version 4.


Overview of NFS Over RDMA

Starting in the Solaris 10 release, the default transport for NFS is the Remote Direct Memory Access (RDMA) protocol, which is a technology for memory-to-memory transfer of data over high speed networks. Specifically, RDMA provides remote data transfer directly to and from memory without CPU intervention. To provide this capability, RDMA combines the interconnect I/O technology of InfiniBand-on-SPARC platforms with the Solaris Operating System. For more information, refer to NFS Over RDMA.

Network Lock Manager and NFS

The Solaris 2.5 release also included an improved version of the network lock manager. The network lock manager provided UNIX record locking and PC file sharing for NFS files. The locking mechanism is now more reliable for NFS files, so commands that use locking are less likely to hang.


Note –

The Network Lock Manager is used only for NFS version 2 and version 3 mounts. File locking is built into the NFS version 4 protocol.


NFS Large File Support

The Solaris 2.6 implementation of the NFS version 3 protocol was changed to correctly manipulate files that were larger than 2 Gbytes. The NFS version 2 protocol and the Solaris 2.5 implementation of the version 3 protocol could not handle files that were larger than 2 Gbytes.

NFS Client Failover

Dynamic failover of read-only file systems was added in the Solaris 2.6 release. Failover provides a high level of availability for read-only resources that are already replicated, such as man pages, other documentation, and shared binaries. Failover can occur anytime after the file system is mounted. Manual mounts can now list multiple replicas, much like the automounter in previous releases. The automounter has not changed, except that failover need not wait until the file system is remounted. See How to Use Client-Side Failover and Client-Side Failover for more information.

Kerberos Support for the NFS Service

Support for Kerberos V4 clients was included in the Solaris 2.0 release. In the 2.6 release, the mount and share commands were altered to support NFS version 3 mounts that use Kerberos V5 authentication. Also, the share command was changed to enable multiple authentication flavors for different clients. See RPCSEC_GSS Security Flavor for more information about changes that involve security flavors. See Configuring Kerberos NFS Servers in System Administration Guide: Security Services for information about Kerberos V5 authentication.

WebNFS Support

The Solaris 2.6 release also included the ability to make a file system on the Internet accessible through firewalls. This capability was provided by using an extension to the NFS protocol. One of the advantages of using the WebNFS protocol for Internet access is its reliability. The service is built as an extension of the NFS version 3 and version 2 protocols. Additionally, the WebNFS implementation provides the ability to share these files without the administrative overhead of an anonymous ftp site. See Security Negotiation for the WebNFS Service for a description of more changes that are related to the WebNFS service. See WebNFS Administration Tasks for more task information.


Note –

The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.


RPCSEC_GSS Security Flavor

A security flavor, called RPCSEC_GSS, is supported in the Solaris 7 release. This flavor uses the standard GSS-API interfaces to provide authentication, integrity, and privacy, as well as enabling support of multiple security mechanisms. See Kerberos Support for the NFS Service for more information about support of Kerberos V5 authentication. See Solaris Security for Developers Guide for more information about GSS-API.

Solaris 7 Extensions for NFS Mounting

The Solaris 7 release includes extensions to the mount command and automountd command. The extensions enable the mount request to use the public file handle instead of the MOUNT protocol. This access method is the same one that the WebNFS service uses. By circumventing the MOUNT protocol, the mount can occur through a firewall. Additionally, because fewer transactions need to occur between the server and the client, the mount should occur faster.

The extensions also enable NFS URLs to be used instead of the standard path name. Also, you can use the public option with the mount command and the automounter maps to force the use of the public file handle. See WebNFS Support for more information about changes to the WebNFS service.
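
For example, the following command forces the use of the public file handle when mounting the file system used as a placeholder elsewhere in this chapter:

# mount -F nfs -o public bee:/export/share/local /mnt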

Security Negotiation for the WebNFS Service

A new protocol has been added to enable a WebNFS client to negotiate a security mechanism with an NFS server in the Solaris 8 release. This protocol provides the ability to use secure transactions when using the WebNFS service. See How WebNFS Security Negotiation Works for more information.

NFS Server Logging

In the Solaris 8 release, NFS server logging enables an NFS server to provide a record of file operations that have been performed on its file systems. The record includes information about which file was accessed, when the file was accessed, and who accessed the file. You can specify the location of the logs that contain this information through a set of configuration options. You can also use these options to select the operations that should be logged. This feature is particularly useful for sites that make anonymous FTP archives available to NFS and WebNFS clients. See How to Enable NFS Server Logging for more information.


Note –

NFS version 4 does not support server logging.


Autofs Features

Autofs works with file systems that are specified in the local namespace. This information can be maintained in NIS, NIS+, or local files.

A fully multithreaded version of automountd was included in the Solaris 2.6 release. This enhancement makes autofs more reliable and enables concurrent servicing of multiple mounts, which prevents the service from hanging if a server is unavailable.

The new automountd also provides better on-demand mounting. Previous releases would mount an entire set of file systems if the file systems were hierarchically related. Now, only the top file system is mounted. Other file systems that are related to this mount point are mounted when needed.

The autofs service supports browsability of indirect maps. This support enables a user to see which directories could be mounted, without having to actually mount each file system. A -nobrowse option has been added to the autofs maps so that large file systems, such as /net and /home, are not automatically browsable. Also, you can turn off autofs browsability on each client by using the -n option with automount. See Disabling Autofs Browsability for more information.
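
As an illustration, on many Solaris systems the default /etc/auto_master map applies this option to /net and /home; the exact entries on your system might differ:

/net        -hosts        -nosuid,nobrowse
/home       auto_home     -nobrowse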

Chapter 5 Network File System Administration (Tasks)

This chapter provides information about how to perform such NFS administration tasks as setting up NFS services, adding new file systems to share, and mounting file systems. The chapter also covers the use of the Secure NFS system and the use of WebNFS functionality. The last part of the chapter includes troubleshooting procedures and a list of some of the NFS error messages and their meanings.

Your responsibilities as an NFS administrator depend on your site's requirements and the role of your computer on the network. You might be responsible for all the computers on your local network, in which case you might be responsible for determining these configuration items:

Maintaining a server after it has been set up involves the following tasks:

Remember, a computer can be both a server and a client. So, a computer can be used to share local file systems with remote computers and to mount remote file systems.


Note –

If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Virtualization Using the Solaris Operating System for more information.


Automatic File-System Sharing

Servers provide access to their file systems by sharing the file systems over the NFS environment. Note the following:

Table 5–1 File-System Sharing Task Map

Task 

Description 

For Instructions 

Establish automatic file-system sharing 

Steps to configure a server so that file systems are automatically shared when the server is rebooted 


Note –

The procedure shows you how to use the sharemgr command. The example that follows the procedure uses the share and shareall commands to complete the same task.


How to Set Up Automatic File-System Sharing

Enable WebNFS 

Steps to configure a server so that users can access files by using WebNFS 


Note –

The procedure shows you how to use the sharemgr command. The example that follows the procedure uses the share and shareall commands to complete the same task.


How to Enable WebNFS Access

Enable NFS server logging 

Steps to configure a server so that NFS logging is run on selected file systems 


Note –

The procedure shows you how to use the sharemgr command. The example that follows the procedure uses the share and shareall commands to complete the same task.


How to Enable NFS Server Logging

How to Set Up Automatic File-System Sharing

Starting with the Solaris Express, Developer Edition 2/07 release, you can do the following:


Note –

When you use sharemgr, you do not need to use the share, shareall, and unshare commands. Also, you do not need to edit the /etc/dfs/dfstab file.


The following procedure uses the sharemgr utility. If you prefer to use the share and shareall utilities, see the example that follows this procedure. Note that whether you use sharemgr or share and shareall, you must set up your autofs maps so that clients can access the file systems that you have shared on the server.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Use the sharemgr utility to assign property values to the file system to be shared.

    1. Use the following syntax to create a share group with the desired property value.


      # sharemgr create [-P protocol] [-p property=value] share-group
    2. Use the following syntax to add shares to the share group.


      # sharemgr add-share -s share-path [-t] [-d description] [-r resource-name] share-group
    3. (Optional) If necessary, use the following syntax to set more property values to an existing share group.


      # sharemgr set [-P protocol] [-S security-mode] [-p property=value] share-group

      Note –

      You do not need to repeat this command-line syntax for each additional property value. You can use the -p option multiple times to define multiple properties on the same command line.


  3. Use the sharemgr utility to verify what you have created by using the following syntax.


    # sharemgr show [-v] [-p] [-x] [share-group...]
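
    The following hypothetical sequence combines these steps; the group name my-group and the path /export/docs are placeholders:


    # sharemgr create -P nfs my-group
    # sharemgr add-share -s /export/docs -d "project documentation" my-group
    # sharemgr show -v my-group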

Example 5–1 How to Use the share and shareall Commands to Set Up Automatic File-System Sharing

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Add entries for each file system to be shared.

    Edit /etc/dfs/dfstab. Add one entry to the file for every file system that you want to be automatically shared. Each entry must be on a line by itself in the file and use this syntax:


    share [-F nfs] [-o specific-options] [-d description] pathname

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  3. Share the file system.

    After the entry is in /etc/dfs/dfstab, you can share the file system by either rebooting the system or by using the shareall command.


    # shareall
    
  4. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,public  ""

See Also

Whether you use sharemgr or share and shareall, the next step is to set up your autofs maps so that clients can access the file systems that you have shared on the server. See Task Overview for Autofs Administration.

How to Enable WebNFS Access

Note the following:

See Planning for WebNFS Access for a list of issues to consider before starting the WebNFS service.

The following procedure uses the sharemgr utility. If you prefer to use the share and shareall utilities, see the example that follows this procedure.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Use the sharemgr utility to assign property values to the file system to be shared by the WebNFS service.

    1. Use the following syntax to create a share group with the desired property value.


      # sharemgr create [-P protocol] [-p property=value] share-group

      For example:

      • To create a share group that forces a specific HTML file to be loaded, you can use the index property:


        # sharemgr create [-P protocol] -p index=[file-path.html] share-group
      • To create a share group that moves the location of the public file handle from root (/) to an exported directory for WebNFS-enabled browsers and clients, you can use the following:


        # sharemgr set -P nfs -p public=true -s share-path share-group

        Note that the public property moves the location of a public file handle from root (/) to an exported directory for WebNFS-enabled browsers and clients. However, only one file system (or share) on each server can use this property. Because a share-group can consist of more than one file system, this property is not accepted by a share group. For more information, see the share_nfs(1M) man page.

    2. Use the following syntax to add shares to the share group.


      # sharemgr add-share -s share-path [-t] [-d description] [-r resource-name] share-group
    3. (Optional) If necessary, use the following syntax to set more property values to an existing share group.


      # sharemgr set [-P protocol] [-S security-mode] [-p property=value] share-group

      Note –

      You do not need to repeat this command-line syntax for each additional property value. You can use the -p option multiple times to define multiple properties on the same command line.


  3. Use the sharemgr utility to verify what you have created.


    # sharemgr show [-v] [-p] [-x] [share-group...]

Example 5–2 How to Use the share and shareall Commands to Enable WebNFS Access

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Add entries for each file system to be shared by using the WebNFS service.

    Edit /etc/dfs/dfstab. Add one entry to the file for every file system. The public and index tags that are shown in the following example are optional.


    share -F nfs -o ro,public,index=index.html /export/ftp

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  3. Share the file system.

    After the entry is in /etc/dfs/dfstab, you can share the file system by either rebooting the system or by using the shareall command.


    # shareall
    
  4. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,public,index=index.html  ""

How to Enable NFS Server Logging

Starting with the Solaris Express, Developer Edition 2/07 release, you can do the following:


Note –

When you use sharemgr, you do not need to use the share, shareall, and unshare commands. Also, you do not need to edit the /etc/dfs/dfstab file.


The following procedure uses the sharemgr utility. If you prefer to use the share and shareall utilities, see the example that follows this procedure.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. (Optional) Change file-system configuration settings.

    In /etc/nfs/nfslog.conf, you can change the settings in one of two ways. You can edit the default settings for all file systems by changing the data that is associated with the global tag. Alternately, you can add a new tag for this file system. If these changes are not needed, you do not need to change this file. The format of /etc/nfs/nfslog.conf is described in the nfslog.conf(4) man page.
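
    For reference, the default global entry in /etc/nfs/nfslog.conf typically resembles the following; the exact paths and file names can differ on your system:


      global  defaultdir=/var/nfs \
              log=nfslog fhtable=fhtable buffer=nfslog_workbuffer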

  3. Use the sharemgr utility to assign property values to the file system to be shared by using NFS server logging.

    1. Use the following syntax to create a share group with the desired property value.


      # sharemgr create [-P protocol] [-p property=value] share-group

      For example:


      # sharemgr create -p log=global my-group
      

      This example uses the default settings associated with the global tag. Note that the tag assigned to the log property must also exist in the /etc/nfs/nfslog.conf file.

    2. Use the following syntax to add shares to the share group.


      # sharemgr add-share -s share-path [-t] [-d description] [-r resource-name] share-group
    3. (Optional) If necessary, use the following syntax to set more property values to an existing share group.


      # sharemgr set [-P protocol] [-S security-mode] [-p property=value] share-group

      For example:


      # sharemgr set -p ro=true my-group
      

      In this example the permissions for my-group are set to read-only.


      Note –

      You do not need to repeat this command-line syntax for each additional property value. You can use the -p option multiple times to define multiple properties on the same command line.


  4. Use the following syntax to verify what you have created.


    # sharemgr show [-v] [-p] [-x] [share-group...]
  5. Check if nfslogd, the NFS log daemon, is running.


    # ps -ef | grep nfslogd
    
  6. (Optional) Start nfslogd, if it is not running.


    # svcadm restart network/nfs/server:default
    

Example 5–3 How to Use the share and shareall Commands to Enable NFS Server Logging

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. (Optional) Change file-system configuration settings.

    In /etc/nfs/nfslog.conf, you can change the settings in one of two ways. You can edit the default settings for all file systems by changing the data that is associated with the global tag. Alternately, you can add a new tag for this file system. If these changes are not needed, you do not need to change this file. The format of /etc/nfs/nfslog.conf is described in nfslog.conf(4).

  3. Add entries for each file system to be shared by using NFS server logging.

    Edit /etc/dfs/dfstab. Add one entry to the file for the file system on which you are enabling NFS server logging. The tag that is used with the log=tag option must be entered in /etc/nfs/nfslog.conf. This example uses the default settings in the global tag.


    share -F nfs -o ro,log=global /export/ftp

    See the dfstab(4) man page for a description of /etc/dfs/dfstab and the share_nfs(1M) man page for a complete list of options.

  4. Share the file system.

    After the entry is in /etc/dfs/dfstab, you can share the file system by either rebooting the system or by using the shareall command.


    # shareall
    
  5. Verify that the information is correct.

    Run the share command to check that the correct options are listed:


    # share
    -        /export/share/man   ro   ""
    -        /usr/src     rw=eng   ""
    -        /export/ftp    ro,log=global  ""
  6. Check if nfslogd, the NFS log daemon, is running.


    # ps -ef | grep nfslogd
    
  7. (Optional) Start nfslogd, if it is not running already.

    • (Optional) If /etc/nfs/nfslogtab is present, start the NFS log daemon by typing the following:


      # svcadm restart network/nfs/server:default
      
    • (Optional) If /etc/nfs/nfslogtab is not present, run any of the share commands to create the file and then start the daemon.


      # shareall
      # svcadm restart network/nfs/server:default
      

Mounting File Systems

You can mount file systems in several ways. File systems can be mounted automatically when the system is booted, on demand from the command line, or through the automounter. The automounter provides many advantages over mounting at boot time or mounting from the command line. However, many situations require a combination of all three methods. Additionally, several ways of enabling or disabling processes exist, depending on the options you use when mounting the file system. See the following table for a complete list of the tasks that are associated with file-system mounting.

Table 5–2 Task Map for Mounting File Systems

Task 

Description 

For Instructions 

Mount a file system at boot time 

Steps so that a file system is mounted whenever a system is rebooted. 

How to Mount a File System at Boot Time.

Mount a file system by using a command 

Steps to mount a file system when a system is running. This procedure is useful when testing. 

How to Mount a File System From the Command Line.

Mount with the automounter 

Steps to access a file system on demand without using the command line. 

Mounting With the Automounter.

Mount a file system with mirrormounts 

Solaris Express, Developer Edition 1/08 release only: Steps to mount one or more file systems using mirrormounts 

Using Mirrormounts After Mounting a File System

Mount all file systems with mirrormounts 

Solaris Express, Developer Edition 1/08 release only: Steps to mount all of the file systems from one server. 

How to Mount All File Systems from a Server

Prevent large files 

Steps to prevent large files from being created on a file system. 

How to Disable Large Files on an NFS Server.

Start client-side failover 

Steps to enable the automatic switchover to a working file system if a server fails. 

How to Use Client-Side Failover.

Disable mount access for a client 

Steps to disable the ability of one client to access a remote file system. 


Note –

The procedure shows you how to use the sharemgr command. The example that follows the procedure uses the share and shareall commands to complete the same task.


How to Disable Mount Access for One Client.

Provide access to a file system through a firewall 

Steps to allow access to a file system through a firewall by using the WebNFS protocol. 

How to Mount an NFS File System Through a Firewall.

Mount a file system by using an NFS URL 

Steps to allow access to a file system by using an NFS URL. This process allows for file-system access without using the MOUNT protocol. 

How to Mount an NFS File System Using an NFS URL.

How to Mount a File System at Boot Time

If you want to mount file systems at boot time instead of using autofs maps, follow this procedure. This procedure must be completed on every client that should have access to remote file systems.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Add an entry for the file system to /etc/vfstab.

    Entries in the /etc/vfstab file have the following syntax:

    special  fsckdev  mountp  fstype  fsckpass  mount-at-boot  mntopts

    See the vfstab(4) man page for more information.


    Caution – Caution –

    NFS servers that also have NFS client vfstab entries must always specify the bg option to avoid a system hang during reboot. For more information, see mount Options for NFS File Systems.



Example 5–4 Entry in the Client's vfstab File

You want a client machine to mount the /var/mail directory from the server wasp. You want the file system to be mounted as /var/mail on the client and you want the client to have read-write access. Add the following entry to the client's vfstab file.


wasp:/var/mail - /var/mail nfs - yes rw

How to Mount a File System From the Command Line

Mounting a file system from the command line is often performed to test a new mount point. This type of mount allows for temporary access to a file system that is not available through the automounter.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Mount the file system.

    Type the following command:


    # mount -F nfs -o ro bee:/export/share/local /mnt
    

    In this instance, the /export/share/local file system from the server bee is mounted on read-only /mnt on the local system. Mounting from the command line allows for temporary viewing of the file system. You can unmount the file system with umount or by rebooting the local host.


    Caution – Caution –

Starting with the Solaris 2.6 release, the mount command does not warn about invalid options. The command silently ignores any options that cannot be interpreted. To prevent unexpected behavior, verify all of the options that were used.



Example 5–5 Using Mirrormounts After Mounting a File System

The Solaris Express, Developer Edition 1/08 release includes the mirrormount facility. This new mounting technology can be used from any NFSv4 client accessing a second file system from an NFSv4 server. Once the first file system is mounted from the server using either the mount command or the automounter, then any file systems that are added to that mount point may be accessed. All you have to do is try to access the file system. The mirrormount occurs automatically. For more information, see How Mirrormounts Work.


Mounting With the Automounter

Task Overview for Autofs Administration includes the specific instructions for establishing and supporting mounts with the automounter. Without any changes to the generic system, clients should be able to access remote file systems through the /net mount point. To mount the /export/share/local file system from the previous example, type the following:


% cd /net/bee/export/share/local

Because the automounter allows all users to mount file systems, root access is not required. The automounter also provides for automatic unmounting of file systems, so you do not need to unmount file systems after you are finished.

See Using Mirrormounts After Mounting a File System for information about how to mount additional file systems on a client running the Solaris Express, Developer Edition 1/08 release.

How to Mount All File Systems from a Server

The Solaris Express, Developer Edition 1/08 release includes the mirrormount facility, which allows a client to access all available file systems shared using NFS from a server, once one mount from that server has succeeded. For more information, see How Mirrormounts Work.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Mount the root of the exported namespace of the server.

    This command mirrors the file system hierarchy from the server on the client. In this case, a /mnt/export/share/local directory structure is created.


    # mount bee:/ /mnt
    
  3. Access a file system.

    This command or any other command which accesses the file system causes the file system to be mounted.


    # cd /mnt/export/share/local
    

How to Disable Large Files on an NFS Server

For servers that support clients that cannot handle files over 2 Gbytes, you might need to disable the ability to create large files.


Note –

Solaris releases prior to the 2.6 release cannot handle large files. If clients need to access large files, check that the clients of the NFS server are running, at a minimum, the 2.6 release.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Check that no large files exist on the file system.

    For example:


    # cd /export/home1
    # find . -xdev -size +2000000 -exec ls -l {} \;
    

    If large files are on the file system, you must remove or move these files to another file system.

  3. Unmount the file system.


    # umount /export/home1
    
  4. Reset the file system state if the file system has been mounted by using largefiles.

    fsck resets the file system state if no large files exist on the file system:


    # fsck /export/home1
    
  5. Mount the file system by using nolargefiles.


    # mount -F ufs -o nolargefiles /export/home1
    

    You can mount from the command line, but to make the option more permanent, add an entry that resembles the following into /etc/vfstab:


    /dev/dsk/c0t3d0s1 /dev/rdsk/c0t3d0s1 /export/home1  ufs  2  yes  nolargefiles

How to Use Client-Side Failover

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. On the NFS client, mount the file system by using the ro option.

    You can mount from the command line, through the automounter, or by adding an entry to /etc/vfstab that resembles the following:


    bee,wasp:/export/share/local  -  /usr/local  nfs  -  no  ro

    This syntax has always been acceptable to the automounter. However, previously failover was available only while a server was being selected, not after the file system was mounted.
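
    A command-line mount that is equivalent to the vfstab entry above would resemble the following; the server names and path are the same placeholders:


    # mount -F nfs -o ro bee,wasp:/export/share/local /usr/local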


    Note –

    Servers that are running different versions of the NFS protocol cannot be mixed by using a command line or in a vfstab entry. Mixing servers that support NFS version 2, version 3, or version 4 protocols can only be performed with autofs. In autofs, the best subset of version 2, version 3, or version 4 servers is used.


How to Disable Mount Access for One Client

Starting with the Solaris Express, Developer Edition 2/07 release, you can do the following:


Note –

When you use sharemgr, you do not need to use the share, shareall, and unshare commands. Also, you do not need to edit the /etc/dfs/dfstab file.


The following procedure uses the sharemgr utility. If you prefer to use the share and shareall utilities, see the example that follows this procedure.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Disable mount access for one client.

    For example:


     # sharemgr set -P nfs -p ro=-rose:eng my-group
    -rose:eng

    An access list that allows mount access to all clients in the eng netgroup except the host rose

    my-group

    The share group


Example 5–6 How to Use the share and shareall Commands to Disable Mount Access for One Client

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Add an entry in /etc/dfs/dfstab.

    The first example allows mount access to all clients in the eng netgroup except the host that is named rose. The second example allows mount access to all clients in the eng.example.com DNS domain except for rose.


    share -F nfs -o ro=-rose:eng /export/share/man
    share -F nfs -o ro=-rose:.eng.example.com /export/share/man

    For additional information about access lists, see Setting Access Lists With the share Command. For a description of /etc/dfs/dfstab, see dfstab(4).

  3. Share the file system.

    The NFS server does not use changes to /etc/dfs/dfstab until the file systems are shared again or until the server is rebooted.


    # shareall

How to Mount an NFS File System Through a Firewall

To access file systems through a firewall, use the following procedure.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Manually mount the file system by using a command such as the following:


    # mount -F nfs bee:/export/share/local /mnt
    

    In this example, the file system /export/share/local is mounted on the local client by using the public file handle. An NFS URL can be used instead of the standard path name. If the public file handle is not supported by the server bee, the mount operation fails.


    Note –

    This procedure requires that the file system on the NFS server be shared by using the public option. Additionally, any firewalls between the client and the server must allow TCP connections on port 2049. Starting with the Solaris 2.6 release, all file systems that are shared allow for public file handle access, so the public option is applied by default.


How to Mount an NFS File System Using an NFS URL

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. (Optional) If you are using NFS version 2 or version 3, manually mount the file system by using a command such as the following:


    # mount -F nfs nfs://bee:3000/export/share/local /mnt
    

    In this example, the /export/share/local file system is being mounted from the server bee by using NFS port number 3000. The port number is not required and by default the standard NFS port number of 2049 is used. You can choose to include the public option with an NFS URL. Without the public option, the MOUNT protocol is used if the public file handle is not supported by the server. The public option forces the use of the public file handle, and the mount fails if the public file handle is not supported.

  3. (Optional) If you are using NFS version 4, manually mount the file system by using a command such as the following:


    # mount -F nfs -o vers=4 nfs://bee:3000/export/share/local /mnt
    

Setting Up NFS Services

This section describes some of the tasks that are necessary to do the following:


Note –

Starting in the Solaris 10 release, NFS version 4 is the default.


Table 5–3 Task Map for NFS Services

Task 

Description 

For Instructions 

Start the NFS server 

Steps to start the NFS service if it has not been started automatically. 

How to Start the NFS Services

Stop the NFS server 

Steps to stop the NFS service. Normally the service should not need to be stopped. 

How to Stop the NFS Services

Start the automounter 

Steps to start the automounter. This procedure is required when some of the automounter maps are changed. 

How to Start the Automounter

Stop the automounter 

Steps to stop the automounter. This procedure is required when some of the automounter maps are changed. 

How to Stop the Automounter

Select a different version of NFS on the server 

Steps to select a different version of NFS on the server. If you choose not to use NFS version 4, use this procedure. 

How to Select Different Versions of NFS on a Server

Select a different version of NFS on the client 

Steps to select a different version of NFS on the client by modifying the /etc/default/nfs file. If you choose not to use NFS version 4, use this procedure.

How to Select Different Versions of NFS on a Client by Modifying the /etc/default/nfs File

 

Alternate steps to select a different version of NFS on the client by using the command line. If you choose not to use NFS version 4, use this alternate procedure. 

How to Use the Command Line to Select Different Versions of NFS on a Client

How to Start the NFS Services

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Enable the NFS service on the server.

    Type the following command.


    # svcadm enable network/nfs/server
    

    This command enables the NFS service.


    Note –

    Starting with the Solaris 9 release, the NFS server starts automatically when you boot the system. Additionally, any time after the system has been booted, the NFS service daemons can be automatically enabled by sharing the NFS file system. See How to Set Up Automatic File-System Sharing.


How to Stop the NFS Services

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Disable the NFS service on the server.

    Type the following command.


    # svcadm disable network/nfs/server
    

How to Start the Automounter

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Enable the autofs daemon.

    Type the following command:


    # svcadm enable system/filesystem/autofs
    

How to Stop the Automounter

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Disable the autofs daemon.

    Type the following command:


    # svcadm disable system/filesystem/autofs
    

Procedure: How to Select Different Versions of NFS on a Server

If you choose not to use NFS version 4, use this procedure.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Edit the /etc/default/nfs file.

    For example, if you want the server to provide only version 3, set the values for both NFS_SERVER_VERSMAX and NFS_SERVER_VERSMIN to 3. For a list of keywords and their values, refer to Keywords for the /etc/default/nfs File.


    NFS_SERVER_VERSMAX=value
    NFS_SERVER_VERSMIN=value
    
    value

    Provide the version number.


    Note –

    By default, these lines are commented out. Remember to also remove the pound (#) sign.


  3. (Optional) If you want to disable server delegation, include this line in the /etc/default/nfs file.


    NFS_SERVER_DELEGATION=off
    

    Note –

    In NFS version 4, server delegation is enabled by default. For more information, see Delegation in NFS Version 4.


  4. (Optional) If you want to set a common domain for clients and servers, include this line in the /etc/default/nfs file.


    NFSMAPID_DOMAIN=my.company.com
    
    my.company.com

    Provide the common domain.

    For more information, refer to nfsmapid Daemon.

  5. Check if the NFS service is running on the server.

    Type the following command:


    # svcs network/nfs/server
    

    This command reports whether the NFS server service is online or disabled.

  6. (Optional) If necessary, disable the NFS service.

    If you discovered from the previous step that the NFS service is online, type the following command to disable the service.


    # svcadm disable network/nfs/server
    

    Note –

    If you need to configure your NFS service, refer to How to Set Up Automatic File-System Sharing.


  7. Enable the NFS service.

    Type the following command to enable the service.


    # svcadm enable network/nfs/server
    
See Also

Version Negotiation in NFS

Procedure: How to Select Different Versions of NFS on a Client by Modifying the /etc/default/nfs File

The following procedure shows you how to control which version of NFS is used on the client by modifying the /etc/default/nfs file. If you prefer to use the command line, refer to How to Use the Command Line to Select Different Versions of NFS on a Client.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Edit the /etc/default/nfs file.

    For example, if you want only version 3 on the client, set the values for both NFS_CLIENT_VERSMAX and NFS_CLIENT_VERSMIN to 3. For a list of keywords and their values, refer to Keywords for the /etc/default/nfs File.


    NFS_CLIENT_VERSMAX=value
    NFS_CLIENT_VERSMIN=value
    
    value

    Provide the version number.


    Note –

    By default, these lines are commented out. Remember to also remove the pound (#) sign.


  3. Mount NFS on the client.

    Type the following command:


    # mount server-name:/share-point /local-dir
    
    server-name

    Provide the name of the server.

    /share-point

    Provide the path of the remote directory to be shared.

    /local-dir

    Provide the path of the local mount point.

See Also

Version Negotiation in NFS

Procedure: How to Use the Command Line to Select Different Versions of NFS on a Client

The following procedure shows you how to use the command line to control which version of NFS is used on a client for a particular mount. If you prefer to modify the /etc/default/nfs file, see How to Select Different Versions of NFS on a Client by Modifying the /etc/default/nfs File.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Mount the desired version of NFS on the client.

    Type the following command:


    # mount -o vers=value server-name:/share-point /local-dir
    
    value

    Provide the version number.

    server-name

    Provide the name of the server.

    /share-point

    Provide the path of the remote directory to be shared.

    /local-dir

    Provide the path of the local mount point.


    Note –

    This command uses the NFS protocol to mount the remote directory and overrides the client settings in the /etc/default/nfs file.


See Also

Version Negotiation in NFS

Administering the Secure NFS System

To use the Secure NFS system, all the computers that you are responsible for must have a domain name. Typically, a domain is an administrative entity of several computers that is part of a larger network. If you are running a name service, you should also establish the name service for the domain. See System Administration Guide: Naming and Directory Services (NIS+).

Kerberos V5 authentication is supported by the NFS service. Chapter 21, Introduction to the Kerberos Service, in System Administration Guide: Security Services discusses the Kerberos service.

You can also configure the Secure NFS environment to use Diffie-Hellman authentication. Chapter 16, Using Authentication Services (Tasks), in System Administration Guide: Security Services discusses this authentication service.

The following procedure shows you how to use the sharemgr utility to set up a secure NFS environment with DH authentication. The example that follows the procedure shows you how to use the share command to complete the same task.

Procedure: How to Set Up a Secure NFS Environment With DH Authentication

Starting with the Solaris Express, Developer Edition 2/07 release, you can use the sharemgr utility to perform this task.


Note –

When you use sharemgr, you do not need to use the share, shareall, and unshare commands. Also, you do not need to edit the /etc/dfs/dfstab file.


The following procedure uses the sharemgr utility. If you prefer to use the share utility, see the example that follows this procedure.

  1. Assign your domain a domain name, and make the domain name known to each computer in the domain.

    See the System Administration Guide: Naming and Directory Services (NIS+) if you are using NIS+ as your name service.

  2. Establish public keys and secret keys for your clients' users by using the newkey or nisaddcred command. Have each user establish his or her own secure RPC password by using the chkey command.


    Note –

    For information about these commands, see the newkey(1M), the nisaddcred(1M), and the chkey(1) man pages.


    When public keys and secret keys have been generated, the public keys and encrypted secret keys are stored in the publickey database.

  3. Verify that the name service is responding.

    For example:

    • If you are running NIS+, type the following:


      # nisping -u
      Last updates for directory eng.acme.com. :
      Master server is eng-master.acme.com.
              Last update occurred at Mon Jun  5 11:16:10 2006
      
      Replica server is eng1-replica-replica-58.acme.com.
              Last Update seen was Mon Jun  5 11:16:10 2006
    • If you are running NIS, verify that the ypbind daemon is running.

  4. Verify that the keyserv daemon of the key server is running.

    Type the following command.


    # ps -ef | grep keyserv
    root    100      1  16    Apr 11 ?        0:00 /usr/sbin/keyserv
    root   2215   2211   5  09:57:28 pts/0    0:00 grep keyserv

    If the daemon is not running, start the key server by typing the following:


    # /usr/sbin/keyserv
    
  5. Decrypt and store the secret key.

    Usually, the login password is identical to the network password. In this situation, keylogin is not required. If the passwords are different, the users have to log in, and then run keylogin. You still need to use the keylogin -r command as root to store the decrypted secret key in /etc/.rootkey.


    Note –

    You need to run keylogin -r if the root secret key changes or if /etc/.rootkey is lost.


  6. Use the sharemgr utility to set the security mode for the file system to be shared.

    For example:


    # sharemgr set -P nfs -S dh MyShareGroup
    
    -P

    Use this option to specify a file-system type, such as nfs.

    -S

    Use this option to specify a security mode, such as sys, dh, or krb5. For more information about security modes, see the nfssec(5) man page.

    MyShareGroup

    Use the name of the share group that you created. For more information, see the sharemgr(1M) man page or sharemgr Command.


    Note –

    You do not need to edit the /etc/dfs/dfstab file.


  7. Update the automounter maps for the file system.

    Edit the auto_master data to include sec=dh as a mount option in the appropriate entries for Diffie-Hellman authentication:


    /home	auto_home	-nosuid,sec=dh

    Note –

    Releases through Solaris 2.5 have a limitation. If a client does not securely mount a file system that is shared as secure, users have access as nobody rather than as themselves. For subsequent releases that use version 2, the NFS server refuses access if the security modes do not match, unless sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so clients do not need to specify sec=dh. The users have access to the files as themselves.


    When you reinstall, move, or upgrade a computer, remember to save /etc/.rootkey if you do not establish new keys or change the keys for root. If you do delete /etc/.rootkey, you can always type the following:


    # keylogin -r
    

Example 5–7 How to Use the share Command to Set Up a Secure NFS Environment With DH Authentication

  1. Assign your domain a domain name, and make the domain name known to each computer in the domain.

    See the System Administration Guide: Naming and Directory Services (NIS+) if you are using NIS+ as your name service.

  2. Establish public keys and secret keys for your clients' users by using the newkey or nisaddcred command. Have each user establish his or her own secure RPC password by using the chkey command.


    Note –

    For information about these commands, see the newkey(1M), the nisaddcred(1M), and the chkey(1) man pages.


    When public keys and secret keys have been generated, the public keys and encrypted secret keys are stored in the publickey database.

  3. Verify that the name service is responding.

    For example:

    • If you are running NIS+, type the following:


      # nisping -u
      Last updates for directory eng.acme.com. :
      Master server is eng-master.acme.com.
              Last update occurred at Mon Jun  5 11:16:10 2006
      
      Replica server is eng1-replica-replica-58.acme.com.
              Last Update seen was Mon Jun  5 11:16:10 2006
    • If you are running NIS, verify that the ypbind daemon is running.

  4. Verify that the keyserv daemon of the key server is running.

    Type the following command.


    # ps -ef | grep keyserv
    root    100      1  16    Apr 11 ?        0:00 /usr/sbin/keyserv
    root   2215   2211   5  09:57:28 pts/0    0:00 grep keyserv

    If the daemon is not running, start the key server by typing the following:


    # /usr/sbin/keyserv
    
  5. Decrypt and store the secret key.

    Usually, the login password is identical to the network password. In this situation, keylogin is not required. If the passwords are different, the users have to log in, and then run keylogin. You still need to use the keylogin -r command as root to store the decrypted secret key in /etc/.rootkey.


    Note –

    You need to run keylogin -r if the root secret key changes or if /etc/.rootkey is lost.


  6. Update mount options for the file system.

    For Diffie-Hellman authentication, edit the /etc/dfs/dfstab file and add the sec=dh option to the appropriate entries.


    share -F nfs -o sec=dh /export/home
    

    See the dfstab(4) man page for a description of /etc/dfs/dfstab.

  7. Update the automounter maps for the file system.

    Edit the auto_master data to include sec=dh as a mount option in the appropriate entries for Diffie-Hellman authentication:


    /home	auto_home	-nosuid,sec=dh

    Note –

    Releases through Solaris 2.5 have a limitation. If a client does not securely mount a file system that is shared as secure, users have access as nobody rather than as themselves. For subsequent releases that use version 2, the NFS server refuses access if the security modes do not match, unless sec=none is included on the share command line. With version 3, the mode is inherited from the NFS server, so clients do not need to specify sec=dh. The users have access to the files as themselves.


    When you reinstall, move, or upgrade a computer, remember to save /etc/.rootkey if you do not establish new keys or change the keys for root. If you do delete /etc/.rootkey, you can always type the following:


    # keylogin -r
    

WebNFS Administration Tasks

This section provides instructions for administering the WebNFS system. Related tasks follow.

Table 5–4 Task Map for WebNFS Administration

Task 

Description 

For Instructions 

Plan for WebNFS 

Issues to consider before enabling the WebNFS service. 

Planning for WebNFS Access

Enable WebNFS 

Steps to enable mounting of an NFS file system by using the WebNFS protocol. 

How to Enable WebNFS Access

Enable WebNFS through a firewall 

Steps to allow access to files through a firewall by using the WebNFS protocol. 

How to Enable WebNFS Access Through a Firewall

Browse by using an NFS URL 

Instructions for using an NFS URL within a web browser. 

How to Browse Using an NFS URL

Use a public file handle with autofs 

Steps to force use of the public file handle when mounting a file system with the automounter. 

How to Use a Public File Handle With Autofs

Use an NFS URL with autofs 

Steps to add an NFS URL to the automounter maps. 

How to Use NFS URLs With Autofs

Provide access to a file system through a firewall 

Steps to allow access to a file system through a firewall by using the WebNFS protocol. 

How to Mount an NFS File System Through a Firewall

Mount a file system by using an NFS URL 

Steps to allow access to a file system by using an NFS URL. This process allows for file-system access without using the MOUNT protocol. 

How to Mount an NFS File System Using an NFS URL

Planning for WebNFS Access

To use WebNFS, you first need an application that is capable of running and loading an NFS URL (for example, nfs://server/path). The next step is to choose the file system that can be exported for WebNFS access. If the application is web browsing, often the document root for the web server is used. You need to consider several factors when choosing a file system to export for WebNFS access.

  1. Each server has one public file handle that by default is associated with the server's root file system. The path in an NFS URL is evaluated relative to the directory with which the public file handle is associated. If the path leads to a file or directory within an exported file system, the server provides access. You can use the public option of the share command to associate the public file handle with a specific exported directory. Using this option allows URLs to be relative to the shared file system rather than to the server's root file system. The root file system does not allow web access unless the root file system is shared.

  2. The WebNFS environment enables users who already have mount privileges to access files through a browser. This capability is enabled regardless of whether the file system is exported by using the public option. Because users already have access to these files through the NFS setup, this access should not create any additional security risk. You only need to share a file system by using the public option if users who cannot mount the file system need to use WebNFS access.

  3. File systems that are already open to the public make good candidates for using the public option. Some examples are the top directory in an ftp archive or the main URL directory for a web site.

  4. You can use the index option with the share command to force the loading of an HTML file. Otherwise, the directory is listed when an NFS URL is accessed. See the example that follows this list.

    After a file system is chosen, review the files and set access permissions to restrict viewing of files or directories, as needed. Establish the permissions, as appropriate, for any NFS file system that is being shared. For many sites, 755 permissions for directories and 644 permissions for files provide the correct level of access.

    You need to consider additional factors if both NFS and HTTP URLs are to be used to access one web site. These factors are described in WebNFS Limitations With Web Browser Use.
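
The following share command is a hypothetical sketch that ties these points together: it shares a web document root read-only, associates the public file handle with it, and uses the index option so that index.html is loaded instead of a directory listing. The /export/web path and the index.html file name are examples only.


# share -F nfs -o ro,public,index=index.html /export/web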

How to Browse Using an NFS URL

Browsers that are capable of supporting the WebNFS service should provide access to an NFS URL that resembles the following:


nfs://server<:port>/path
server

Name of the file server

port

Port number to use (the default is 2049)

path

Path to file, which can be relative to the public file handle or to the root file system
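
For example, the following hypothetical URL requests the file index.html, relative to the directory that is associated with the public file handle, from the server bee by using the default port. The file name is assumed; the server name matches the examples used elsewhere in this chapter.


nfs://bee/index.html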


Note –

In most browsers, the URL service type (for example, nfs or http) is remembered from one transaction to the next. The exception occurs when a URL that includes a different service type is loaded. After you use an NFS URL, a reference to an HTTP URL might be loaded. If such a reference is loaded, subsequent pages are loaded by using the HTTP protocol instead of the NFS protocol.


How to Enable WebNFS Access Through a Firewall

You can enable WebNFS access for clients that are not part of the local subnet by configuring the firewall to allow a TCP connection on port 2049. Just allowing access for httpd does not allow NFS URLs to be used.
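
For example, the following fragment is a minimal sketch of a rule that permits such connections, assuming that the firewall runs Solaris IP Filter and that e1000g0 is its external interface. Both the interface name and the use of IP Filter are assumptions about your configuration; other firewall products require their own syntax.


# Hypothetical /etc/ipf/ipf.conf fragment: allow inbound WebNFS (NFS over TCP, port 2049)
pass in quick on e1000g0 proto tcp from any to any port = 2049 keep state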

Task Overview for Autofs Administration

This section describes some of the most common tasks you might encounter in your own environment. Recommended procedures are included for each scenario to help you configure autofs to best meet your clients' needs. To perform the tasks that are discussed in this section, use the Solaris Management Console tools or see the System Administration Guide: Naming and Directory Services (NIS+).


Note –

Starting in the Solaris 10 release, you can also use the /etc/default/autofs file to configure your autofs environment. For task information, refer to Using the /etc/default/autofs File to Configure Your autofs Environment.


Task Map for Autofs Administration

The following table provides a description and a pointer to many of the tasks that are related to autofs.

Table 5–5 Task Map for Autofs Administration

Task 

Description 

For Instructions 

Start autofs 

Start the automount service without having to reboot the system 

How to Start the Automounter

Stop autofs 

Stop the automount service without disabling other network services 

How to Stop the Automounter

Configure your autofs environment by using the /etc/default/autofs file

Assign values to keywords in the /etc/default/autofs file

Using the /etc/default/autofs File to Configure Your autofs Environment

Access file systems by using autofs 

Access file systems by using the automount service 

Mounting With the Automounter

Modify the autofs maps 

Steps to modify the master map, which should be used to list other maps 

How to Modify the Master Map

 

Steps to modify an indirect map, which should be used for most maps 

How to Modify Indirect Maps

 

Steps to modify a direct map, which should be used when a direct association between a mount point on a client and a server is required 

How to Modify Direct Maps

Modify the autofs maps to access non-NFS file systems 

Steps to set up an autofs map with an entry for a CD-ROM application 

How to Access CD-ROM Applications With Autofs

 

Steps to set up an autofs map with an entry for a PC-DOS diskette 

How to Access PC-DOS Data Diskettes With Autofs

 

Steps to use autofs to access a CacheFS file system 

How to Access NFS File Systems by Using CacheFS

Using /home

Example of how to set up a common /home map

Setting Up a Common View of /home

 

Steps to set up a /home map that refers to multiple file systems

How to Set Up /home With Multiple Home Directory File Systems

Using a new autofs mount point 

Steps to set up a project-related autofs map 

How to Consolidate Project-Related Files Under /ws

 

Steps to set up an autofs map that supports different client architectures 

How to Set Up Different Architectures to Access a Shared Namespace

 

Steps to set up an autofs map that supports different operating systems 

How to Support Incompatible Client Operating System Versions

Replicate file systems with autofs 

Provide access to file systems that fail over 

How to Replicate Shared Files Across Several Servers

Using security restrictions with autofs 

Provide access to file systems while restricting remote root access to the files

How to Apply Autofs Security Restrictions

Using a public file handle with autofs 

Force use of the public file handle when mounting a file system 

How to Use a Public File Handle With Autofs

Using an NFS URL with autofs 

Add an NFS URL so that the automounter can use it 

How to Use NFS URLs With Autofs

Disable autofs browsability 

Steps to disable browsability so that autofs mount points are not automatically populated on a single client 

How to Completely Disable Autofs Browsability on a Single NFS Client

 

Steps to disable browsability so that autofs mount points are not automatically populated on all clients 

How to Disable Autofs Browsability for All Clients

 

Steps to disable browsability so that a specific autofs mount point is not automatically populated on a client 

How to Disable Autofs Browsability on a Selected File System

Using the /etc/default/autofs File to Configure Your autofs Environment

Starting in the Solaris 10 release, you can use the /etc/default/autofs file to configure your autofs environment. Specifically, this file provides an additional way to configure your autofs commands and autofs daemons. The same specifications you would make on the command line can be made in this configuration file. You can make your specifications by providing values to keywords. For more information, refer to /etc/default/autofs File.

The following procedure shows you how to use the /etc/default/autofs file.

Procedure: How to Use the /etc/default/autofs File

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Add or modify an entry in the /etc/default/autofs file.

    For example, if you want to turn off browsing for all autofs mount points, you could add the following line.


    AUTOMOUNTD_NOBROWSE=ON
    

    This keyword is the equivalent of the -n argument for automountd. For a list of keywords, refer to /etc/default/autofs File.

  3. Restart the autofs daemon.

    Type the following command:


    # svcadm restart system/filesystem/autofs
    

Administrative Tasks Involving Maps

The following tables describe several of the factors you need to be aware of when administering autofs maps. Your choice of map and name service affect the mechanism that you need to use to make changes to the autofs maps.

The following table describes the types of maps and their uses.

Table 5–6 Types of autofs Maps and Their Uses

Type of Map 

Use 

Master

Associates a directory with a map 

Direct

Directs autofs to specific file systems 

Indirect

Directs autofs to reference-oriented file systems 

The following table describes how to make changes to your autofs environment that are based on your name service.

Table 5–7 Map Maintenance

Name Service 

Method 

Local files 

Text editor

NIS 

make files

NIS+ 

nistbladm
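
For example, with NIS, after you edit an autofs map source file on the NIS master, you typically propagate the change by rebuilding the maps. The following sketch assumes that the standard /var/yp Makefile is in use on the NIS master:


# cd /var/yp
# make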

The next table tells you when to run the automount command, depending on the modification you have made to the type of map. For example, if you have made an addition or a deletion to a direct map, you need to run the automount command on the local system. By running the command, you make the change effective. However, if you have modified an existing entry, you do not need to run the automount command for the change to become effective.

Table 5–8 When to Run the automount Command

Type of Map      Restart automount?
                 Addition or Deletion      Modification
auto_master      Y                         Y
direct           Y                         N
indirect         N                         N

Modifying the Maps

The following procedures require that you use NIS+ as your name service.

Procedure: How to Modify the Master Map

  1. Log in as a user who has permissions to change the maps.

  2. Using the nistbladm command, make your changes to the master map.

    See the System Administration Guide: Naming and Directory Services (NIS+).

  3. For each client, become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  4. For each client, run the automount command to ensure that your changes become effective.

  5. Notify your users of the changes.

    Notification is required so that the users can also run the automount command as superuser on their own computers. Note that the automount command gathers information from the master map whenever it is run.

Procedure: How to Modify Indirect Maps

  1. Log in as a user who has permissions to change the maps.

  2. Using the nistbladm command, make your changes to the indirect map.

    See the System Administration Guide: Naming and Directory Services (NIS+). Note that the change becomes effective the next time that the map is used, which is the next time a mount is performed.

Procedure: How to Modify Direct Maps

  1. Log in as a user who has permissions to change the maps.

  2. Using the nistbladm command, add or delete your changes to the direct map.

    See the System Administration Guide: Naming and Directory Services (NIS+).

  3. If you added or deleted a mount-point entry in the previous step, run the automount command.

  4. Notify your users of the changes.

    Notification is required so that the users can also run the automount command as superuser on their own computers.


    Note –

    If you only modify or change the contents of an existing direct map entry, you do not need to run the automount command.


    For example, suppose you modify the auto_direct map so that the /usr/src directory is now mounted from a different server. If /usr/src is not mounted at this time, the new entry becomes effective immediately when you try to access /usr/src. If /usr/src is mounted now, you can wait until the auto-unmounting occurs, then access the file.


    Note –

    Use indirect maps whenever possible. Indirect maps are easier to construct and less demanding on the computers' file systems. Also, indirect maps do not occupy as much space in the mount table as direct maps.


Avoiding Mount-Point Conflicts

If you have a local disk partition that is mounted on /src and you plan to use the autofs service to mount other source directories, you might encounter a problem. If you specify the mount point /src, the NFS service hides the local partition whenever you try to reach it.

You need to mount the partition in some other location, for example, on /export/src. You then need an entry in /etc/vfstab such as the following:


/dev/dsk/c0t3d0s5 /dev/rdsk/c0t3d0s5 /export/src ufs 3 yes - 

You also need this entry in auto_src:


terra		terra:/export/src 

terra is the name of the computer.

Accessing Non-NFS File Systems

Autofs can also mount file systems other than NFS file systems. For example, autofs can mount file systems on removable media, such as diskettes or CD-ROMs. Normally, you would mount file systems on removable media by using the Volume Manager. The following examples show how this mounting could be accomplished through autofs. Because the Volume Manager and autofs do not work together, these entries cannot be used without first deactivating the Volume Manager.

Instead of mounting a file system from a server, you put the media in the drive and reference the file system from the map. If you plan to access non-NFS file systems and you are using autofs, see the following procedures.
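
For example, on a Solaris 10 system where the Volume Manager runs as the volfs SMF service (an assumption about your configuration), you could deactivate it before using the autofs entries that follow:


# svcadm disable system/filesystem/volfs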

Procedure: How to Access CD-ROM Applications With Autofs


Note –

Use this procedure if you are not using Volume Manager.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Update the autofs map.

    Add an entry for the CD-ROM file system, which should resemble the following:


    hsfs     -fstype=hsfs,ro     :/dev/sr0

    The CD-ROM device that you intend to mount must appear as a name that follows the colon.

Procedure: How to Access PC-DOS Data Diskettes With Autofs


Note –

Use this procedure if you are not using Volume Manager.


  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Update the autofs map.

    Add an entry for the diskette file system such as the following:


     pcfs     -fstype=pcfs     :/dev/diskette

Accessing NFS File Systems Using CacheFS

The cache file system (CacheFS) is a generic nonvolatile caching mechanism. CacheFS improves the performance of certain file systems by utilizing a small, fast local disk. For example, you can improve the performance of the NFS environment by using CacheFS.

CacheFS works differently with different versions of NFS. For example, if both the client and the back file system are running NFS version 2 or version 3, the files are cached in the front file system for access by the client. However, if both the client and the server are running NFS version 4, the functionality is as follows. When the client makes the initial request to access a file from a CacheFS file system, the request bypasses the front (or cached) file system and goes directly to the back file system. With NFS version 4, files are no longer cached in a front file system. All file access is provided by the back file system. Also, since no files are being cached in the front file system, CacheFS-specific mount options, which are meant to affect the front file system, are ignored. CacheFS-specific mount options do not apply to the back file system.


Note –

The first time you configure your system for NFS version 4, a warning appears on the console to indicate that caching is no longer performed.


Procedure: How to Access NFS File Systems by Using CacheFS

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Run the cfsadmin command to create a cache directory on the local disk.


    # cfsadmin -c /var/cache
    
  3. Add the cachefs entry to the appropriate automounter map.

    For example, adding this entry to the master map caches all home directories:


    /home auto_home -fstype=cachefs,cachedir=/var/cache,backfstype=nfs

    Adding this entry to the auto_home map only caches the home directory for the user who is named rich:


    rich -fstype=cachefs,cachedir=/var/cache,backfstype=nfs dragon:/export/home1/rich

    Note –

    Options that are included in maps that are searched later override options that are set in maps that are searched earlier. The last options that are found are the ones that are used. In the previous example, an entry in the auto_home map needs to include the options from the master map only if some of those options require changes.
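
    For instance, the following hypothetical pair of entries illustrates this behavior. The nosuid option that is set in the master map is replaced by the option list in the auto_home entry, so nosuid must be repeated there for it to stay in effect.


    # auto_master entry (searched first)
    /home   auto_home   -nosuid
    # auto_home entry (searched later): its option list replaces the one above
    rich    -nosuid,fstype=cachefs,cachedir=/var/cache,backfstype=nfs    dragon:/export/home1/rich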


Customizing the Automounter

You can set up the automounter maps in several ways. The following tasks give details about how to customize the automounter maps to provide an easy-to-use directory structure.

Setting Up a Common View of /home

The ideal is for all network users to be able to locate their own or anyone's home directory under /home. This view should be common across all computers, whether client or server.

Every Solaris installation comes with a master map: /etc/auto_master.


# Master map for autofs
#
+auto_master
/net     -hosts     -nosuid,nobrowse
/home    auto_home  -nobrowse

A map for auto_home is also installed under /etc.


# Home directory map for autofs
#
+auto_home

Except for a reference to an external auto_home map, this map is empty. If the directories under /home are to be common to all computers, do not modify this /etc/auto_home map. All home directory entries should appear in the name service files, either NIS or NIS+.


Note –

Users should not be permitted to run setuid executables from their home directories. Without this restriction, any user could have superuser privileges on any computer.


Procedure: How to Set Up /home With Multiple Home Directory File Systems

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Install home directory partitions under /export/home.

    If the system has several partitions, install the partitions under separate directories, for example, /export/home1 and /export/home2.

  3. Use the Solaris Management Console tools to create and maintain the auto_home map.

    Whenever you create a new user account, type the location of the user's home directory in the auto_home map. Map entries can be simple, for example:


    rusty        dragon:/export/home1/&
    gwenda       dragon:/export/home1/&
    charles      sundog:/export/home2/&
    rich         dragon:/export/home3/&

    Notice the use of the & (ampersand) to substitute the map key. For instance, the & in the first entry stands for the key rusty, so that entry is equivalent to the following:


    rusty     	dragon:/export/home1/rusty

    With the auto_home map in place, users can refer to any home directory (including their own) with the path /home/user. user is their login name and the key in the map. This common view of all home directories is valuable when logging in to another user's computer. Autofs mounts your home directory for you. Similarly, if you run a remote windowing system client on another computer, the client program has the same view of the /home directory.

    This common view also extends to the server. Using the previous example, if rusty logs in to the server dragon, autofs there provides direct access to the local disk by loopback-mounting /export/home1/rusty onto /home/rusty.

    Users do not need to be aware of the real location of their home directories. If rusty needs more disk space and needs to have his home directory relocated to another server, a simple change is sufficient. You need only change rusty's entry in the auto_home map to reflect the new location. Other users can continue to use the /home/rusty path.
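
    For example, if rusty's home directory were later relocated to the server sundog (a hypothetical move), only rusty's entry in the auto_home map would change, and users would continue to use the /home/rusty path:


    rusty        sundog:/export/home2/&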

Procedure: How to Consolidate Project-Related Files Under /ws

Assume that you are the administrator of a large software development project. You plan to make all project-related files available under a directory that is called /ws. This directory is to be common across all workstations at the site.

  1. Add an entry for the /ws directory to the site auto_master map, either NIS or NIS+.


    /ws     auto_ws     -nosuid 

    The auto_ws map determines the contents of the /ws directory.

  2. Add the -nosuid option as a precaution.

    This option prevents users from running setuid programs that might exist in any workspaces.

  3. Add entries to the auto_ws map.

    The auto_ws map is organized so that each entry describes a subproject. Your first attempt yields a map that resembles the following:


    compiler   alpha:/export/ws/&
    windows    alpha:/export/ws/&
    files      bravo:/export/ws/&
    drivers    alpha:/export/ws/&
    man        bravo:/export/ws/&
    tools      delta:/export/ws/&

    The ampersand (&) at the end of each entry is an abbreviation for the entry key. For instance, the first entry is equivalent to the following:


    compiler		alpha:/export/ws/compiler 

    This first attempt provides a map that appears simple, but the map is inadequate. The project organizer decides that the documentation in the man entry should be provided as a subdirectory under each subproject. Also, each subproject requires subdirectories to describe several versions of the software. You must assign each of these subdirectories to an entire disk partition on the server.

    Modify the entries in the map as follows:


    compiler \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /man        bravo:/export/ws/&/man
    windows \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    files \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /vers2.0    bravo:/export/ws/&/vers2.0 \
        /vers3.0    bravo:/export/ws/&/vers3.0 \
        /man        bravo:/export/ws/&/man
    drivers \
        /vers1.0    alpha:/export/ws/&/vers1.0 \
        /man        bravo:/export/ws/&/man
    tools \
        /           delta:/export/ws/&

    Although the map now appears to be much larger, the map still contains only the five entries. Each entry is larger because each entry contains multiple mounts. For instance, a reference to /ws/compiler requires three mounts for the vers1.0, vers2.0, and man directories. The backslash at the end of each line informs autofs that the entry is continued onto the next line. Effectively, the entry is one long line, though line breaks and some indenting have been used to make the entry more readable. The tools directory contains software development tools for all subprojects, so this directory is not subject to the same subdirectory structure. The tools directory continues to be a single mount.

    This arrangement provides the administrator with much flexibility. Software projects typically consume substantial amounts of disk space. Through the life of the project, you might be required to relocate and expand various disk partitions. If these changes are reflected in the auto_ws map, the users do not need to be notified, as the directory hierarchy under /ws is not changed.

    Because the servers alpha and bravo view the same autofs map, any users who log in to these computers can find the /ws namespace as expected. These users are provided with direct access to local files through loopback mounts instead of NFS mounts.

Procedure: How to Set Up Different Architectures to Access a Shared Namespace

You need to assemble a shared namespace for local executables and applications, such as spreadsheet applications and word-processing packages. The clients of this namespace use several different workstation architectures that require different executable formats. Also, some workstations are running different releases of the operating system.

  1. Create the auto_local map with the nistbladm command.

    See the System Administration Guide: Naming and Directory Services (NIS+).

  2. Choose a single, site-specific name for the shared namespace. This name makes the files and directories that belong to this space easily identifiable.

    For example, if you choose /usr/local as the name, the path /usr/local/bin is obviously a part of this namespace.

  3. For ease of user community recognition, create an autofs indirect map. Mount this map at /usr/local. Set up the following entry in the NIS+ (or NIS) auto_master map:


    /usr/local     auto_local     -ro

    Notice that the -ro mount option implies that clients cannot write to any files or directories.

  4. Export the appropriate directory on the server.

  5. Include a bin entry in the auto_local map.

    Your directory structure resembles the following:


     bin     aa:/export/local/bin 
  6. (Optional) To serve clients of different architectures, change the entry by adding the autofs CPU variable.


    bin     aa:/export/local/bin/$CPU 
    • For SPARC clients – Place executables in /export/local/bin/sparc.

    • For x86 clients – Place executables in /export/local/bin/i386.

Procedure: How to Support Incompatible Client Operating System Versions

  1. Combine the architecture type with a variable that determines the operating system type of the client.

    You can combine the autofs OSREL variable with the CPU variable to form a name that determines both CPU type and OS release.

  2. Create the following map entry.


    bin     aa:/export/local/bin/$CPU$OSREL

    For clients that are running version 5.6 of the operating system, export the following file systems:

    • For SPARC clients – Export /export/local/bin/sparc5.6.

    • For x86 clients – Export /export/local/bin/i3865.6.

Procedure: How to Replicate Shared Files Across Several Servers

The best way to share replicated file systems that are read-only is to use failover. See Client-Side Failover for a discussion of failover.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Modify the entry in the autofs maps.

    Create the list of all replica servers as a comma-separated list, such as the following:


    bin     aa,bb,cc,dd:/export/local/bin/$CPU
    

    Autofs chooses the nearest server. If a server has several network interfaces, list each interface. Autofs chooses the nearest interface to the client, avoiding unnecessary routing of NFS traffic.

Procedure: How to Apply Autofs Security Restrictions

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Create the following entry in the name service auto_master file, either NIS or NIS+:


    /home     auto_home     -nosuid
    

    The nosuid option prevents users from creating files with the setuid or setgid bit set.

    This entry overrides the entry for /home in a generic local /etc/auto_master file. See the previous example. The override happens because the +auto_master reference to the external name service map occurs before the /home entry in the file. If the entries in the auto_home map include mount options, the nosuid option is overwritten. Therefore, either no options should be used in the auto_home map or the nosuid option must be included with each entry.


    Note –

    Do not mount the home directory disk partitions on or under /home on the server.


Procedure: How to Use a Public File Handle With Autofs

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Create an entry in the autofs map such as the following:


    /usr/local     -ro,public    bee:/export/share/local

    The public option forces the public handle to be used. If the NFS server does not support a public file handle, the mount fails.

Procedure: How to Use NFS URLs With Autofs

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Create an autofs entry such as the following:


    /usr/local     -ro    nfs://bee/export/share/local

    The service tries to use the public file handle on the NFS server. However, if the server does not support a public file handle, the MOUNT protocol is used.

Disabling Autofs Browsability

Starting with the Solaris 2.6 release, the default version of /etc/auto_master that is installed has the -nobrowse option added to the entries for /home and /net. In addition, the upgrade procedure adds the -nobrowse option to the /home and /net entries in /etc/auto_master if these entries have not been modified. However, you might have to make these changes manually or turn off browsability for site-specific autofs mount points after the installation.

You can turn off the browsability feature in several ways. Disable the feature by using a command-line option to the automountd daemon, which completely disables autofs browsability for the client. Or disable browsability for each map entry on all clients by using the autofs maps in either an NIS or NIS+ namespace. You can also disable the feature for each map entry on each client, using local autofs maps if no network-wide namespace is being used.

Procedure: How to Completely Disable Autofs Browsability on a Single NFS Client

  1. Become superuser or assume an equivalent role on the NFS client.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Edit the /etc/default/autofs file to include the following keyword and value.


    AUTOMOUNTD_NOBROWSE=TRUE
  3. Restart the autofs service.


    # svcadm restart system/filesystem/autofs
    

Procedure: How to Disable Autofs Browsability for All Clients

To disable browsability for all clients, you must employ a name service such as NIS or NIS+. Otherwise, you need to manually edit the automounter maps on each client. In this example, the browsability of the /home directory is disabled. You must follow this procedure for each indirect autofs node that needs to be disabled.

  1. Add the -nobrowse option to the /home entry in the name service auto_master file.


    /home     auto_home     -nobrowse
    
  2. Run the automount command on all clients.

    The new behavior becomes effective after you run the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
    

Procedure: How to Disable Autofs Browsability on a Selected File System

In this example, browsability of the /net directory is disabled. You can use the same procedure for /home or any other autofs mount points.

  1. Check the automount entry in /etc/nsswitch.conf.

    For local file entries to have precedence, the entry in the name service switch file should list files before the name service. For example:


    automount:  files nisplus

    This entry shows the default configuration in a standard Solaris installation.

  2. Check the position of the +auto_master entry in /etc/auto_master.

    For additions to the local files to have precedence over the entries in the namespace, the +auto_master entry must be moved to follow /net:


    # Master map for automounter
    #
    /net    -hosts     -nosuid
    /home   auto_home
    /xfn    -xfn
    +auto_master
    

    A standard configuration places the +auto_master entry at the top of the file. This placement prevents any local changes from being used.

  3. Add the nobrowse option to the /net entry in the /etc/auto_master file.


    /net     -hosts     -nosuid,nobrowse
    
  4. On all clients, run the automount command.

    The new behavior becomes effective after running the automount command on the client systems or after a reboot.


    # /usr/sbin/automount
    

Strategies for NFS Troubleshooting

When tracking an NFS problem, remember the main points of possible failure: the server, the client, and the network. The strategy that is outlined in this section tries to isolate each individual component to find the one that is not working. In all situations, the mountd and nfsd daemons must be running on the server for remote mounts to succeed.

The -intr option is set by default for all mounts. If a program hangs with a server not responding message, you can kill the program with the keyboard interrupt Control-c.

When the network or server has problems, programs that access hard-mounted remote files fail differently than those programs that access soft-mounted remote files. Hard-mounted remote file systems cause the client's kernel to retry the requests until the server responds again. Soft-mounted remote file systems cause the client's system calls to return an error after trying for a while. Because these errors can result in unexpected application errors and data corruption, avoid soft mounting.
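
For example, the following hypothetical mount command makes the default behavior explicit by requesting a hard mount that can be interrupted from the keyboard. Because hard and intr are already the defaults, the options are shown here only for illustration; the server name and directories are examples.


# mount -F nfs -o hard,intr bee:/export/share/man /usr/local/man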

When a file system is hard mounted, a program that tries to access the file system hangs if the server fails to respond. In this situation, the NFS system displays the following message on the console:


NFS server hostname not responding still trying

When the server finally responds, the following message appears on the console:


NFS server hostname ok

A program that accesses a soft-mounted file system whose server is not responding generates the following message:


NFS operation failed for server hostname: error # (error-message)

Note –

Because of possible errors, do not soft-mount file systems with read-write data or file systems from which executables are run. Writable data could be corrupted if the application ignores the errors. Mounted executables might not load properly and can fail.


NFS Troubleshooting Procedures

To determine where the NFS service has failed, you need to follow several procedures to isolate the failure. Check for the following items:

  • Whether the client can reach the server. See How to Check Connectivity on an NFS Client.

  • Whether the client can contact the NFS services on the server. See How to Check the NFS Server Remotely.

  • Whether the NFS services are running on the server. See How to Verify the NFS Service on the Server.

In the process of checking these items, you might notice that other portions of the network are not functioning. For example, the name service or the physical network hardware might not be functioning. The System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) contains debugging procedures for several name services. Also, during the process you might see that the problem is not at the client end. An example is if you get at least one trouble call from every subnet in your work area. In this situation, you should assume that the problem is the server or the network hardware near the server. So, you should start the debugging process at the server, not at the client.

Procedure: How to Check Connectivity on an NFS Client

  1. Check that the NFS server is reachable from the client. On the client, type the following command.


    % /usr/sbin/ping bee
    bee is alive

    If the command reports that the server is alive, remotely check the NFS server. See How to Check the NFS Server Remotely.

  2. If the server is not reachable from the client, ensure that the local name service is running.

    For NIS+ clients, type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  3. If the name service is running, ensure that the client has received the correct host information by typing the following:


    % /usr/bin/getent hosts bee
    129.144.83.117	bee.eng.acme.com
  4. If the host information is correct, but the server is not reachable from the client, run the ping command from another client.

    If the command run from a second client fails, see How to Verify the NFS Service on the Server.

  5. If the server is reachable from the second client, use ping to check connectivity of the first client to other systems on the local net.

    If this command fails, check the networking software configuration on the client, for example, /etc/netmasks and /etc/nsswitch.conf.

  6. (Optional) Check the output of the rpcinfo command.

    If the rpcinfo command does not display program 100003 version 4 ready and waiting, then NFS version 4 is not enabled on the server. See Table 5–3 for information about enabling NFS version 4.

  7. If the software is correct, check the networking hardware.

    Try to move the client onto a second net drop.

Procedure: How to Check the NFS Server Remotely

Note that support for both the UDP and the MOUNT protocols is not necessary if you are using an NFS version 4 server.

  1. Check that the NFS services have started on the NFS server by typing the following command:


    % rpcinfo -s bee|egrep 'nfs|mountd'
     100003  3,2    tcp,udp,tcp6,udp6                nfs     superuser
     100005  3,2,1  ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6  mountd  superuser

    If the daemons have not been started, see How to Restart NFS Services.

  2. Check that the server's nfsd processes are responding.

    On the client, type the following command to test the UDP NFS connections from the server.


    % /usr/bin/rpcinfo -u bee nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting

    Note –

    NFS version 4 does not support UDP.


    If the server is running, it prints a list of program and version numbers. Using the -t option tests the TCP connection. If this command fails, proceed to How to Verify the NFS Service on the Server.

  3. Check that the server's mountd is responding, by typing the following command.


    % /usr/bin/rpcinfo -u bee mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Using the -t option tests the TCP connection. If either attempt fails, proceed to How to Verify the NFS Service on the Server.

  4. Check the local autofs service if it is being used:


    % cd /net/wasp
    

    Choose a /net or /home mount point that you know should work properly. If this command fails, then as root on the client, type the following to restart the autofs service:


    # svcadm restart system/filesystem/autofs
    
  5. Verify that file system is shared as expected on the server.


    % /usr/sbin/showmount -e bee
    /usr/src               eng
    /export/share/man      (everyone)

    Check the entry on the server and the local mount entry for errors. Also, check the namespace. In this instance, if the first client is not in the eng netgroup, that client cannot mount the /usr/src file system.

    Check all entries that include mounting information in all the local files. The list includes /etc/vfstab and all the /etc/auto_* files.
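    For example, the following command searches the most common files for entries that reference the example server bee. Any matching lines are displayed with the name of the file that contains them.


    % grep bee /etc/vfstab /etc/auto_*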

How to Verify the NFS Service on the Server

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Check that the server can reach the clients.


    # ping lilac
    lilac is alive
  3. If the client is not reachable from the server, ensure that the local name service is running. For NIS+ clients, type the following:


    % /usr/lib/nis/nisping -u
    Last updates for directory eng.acme.com. :
    Master server is eng-master.acme.com.
            Last update occurred at Mon Jun  5 11:16:10 1995
    
    Replica server is eng1-replica-58.acme.com.
            Last Update seen was Mon Jun  5 11:16:10 1995
  4. If the name service is running, check the networking software configuration on the server, for example, /etc/netmasks and /etc/nsswitch.conf.

  5. Type the following command to check whether the rpcbind daemon is running.


    # /usr/bin/rpcinfo -u localhost rpcbind
    program 100000 version 1 ready and waiting
    program 100000 version 2 ready and waiting
    program 100000 version 3 ready and waiting

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. If rpcbind seems to be hung, either reboot the server or follow the steps in How to Warm-Start rpcbind.

  6. Type the following command to check whether the nfsd daemon is running.


    # rpcinfo -u localhost nfs
    program 100003 version 2 ready and waiting
    program 100003 version 3 ready and waiting
    # ps -ef | grep nfsd
    root    232      1  0  Apr 07     ?     0:01 /usr/lib/nfs/nfsd -a 16
    root   3127   2462  1  09:32:57  pts/3  0:00 grep nfsd

    Note –

    NFS version 4 does not support UDP.


    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.

  7. Type the following command to check whether the mountd daemon is running.


    # /usr/bin/rpcinfo -u localhost mountd
    program 100005 version 1 ready and waiting
    program 100005 version 2 ready and waiting
    program 100005 version 3 ready and waiting
    # ps -ef | grep mountd
    root    145      1 0 Apr 07  ?     21:57 /usr/lib/autofs/automountd
    root    234      1 0 Apr 07  ?     0:04  /usr/lib/nfs/mountd
    root   3084 2462 1 09:30:20 pts/3  0:00  grep mountd

    If the server is running, it prints a list of program and version numbers that are associated with the UDP protocol. Also use the -t option with rpcinfo to check the TCP connection. If these commands fail, restart the NFS service. See How to Restart NFS Services.

How to Restart NFS Services

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Restart the NFS service on the server.

    Type the following command.


    # svcadm restart network/nfs/server
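    (Optional) Confirm that the service is back online by using the svcs command. The output shown is illustrative.


    # svcs network/nfs/server
    STATE          STIME    FMRI
    online         12:25:41 svc:/network/nfs/server:default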
    

How to Warm-Start rpcbind

If the NFS server cannot be rebooted because of work in progress, you can restart rpcbind without having to restart all of the services that use RPC. Just complete a warm start by following these steps.

  1. Become superuser or assume an equivalent role.

    Roles contain authorizations and privileged commands. For more information about roles, see Configuring RBAC (Task Map) in System Administration Guide: Security Services. To configure a role with the Primary Administrator profile, see Chapter 2, Working With the Solaris Management Console (Tasks), in System Administration Guide: Basic Administration.

  2. Determine the PID for rpcbind.

    Run ps to get the PID, which is the value in the second column.


    # ps -ef |grep rpcbind
        root   115     1  0   May 31 ?        0:14 /usr/sbin/rpcbind
        root 13000  6944  0 11:11:15 pts/3    0:00 grep rpcbind
  3. Send a SIGTERM signal to the rpcbind process.

    In this example, term is the signal that is to be sent and 115 is the PID for the program (see the kill(1) man page). This command causes rpcbind to create a list of the current registered services in /tmp/portmap.file and /tmp/rpcbind.file.


    # kill -s term 115
    

    Note –

    If you do not kill the rpcbind process with the -s term option, you cannot complete a warm start of rpcbind. You must reboot the server to restore service.


  4. Restart rpcbind.

    Warm-start the daemon so that the files that were created by the kill command are consulted. A warm start also ensures that the process resumes without requiring a restart of all the RPC services. See the rpcbind(1M) man page.


    # /usr/sbin/rpcbind -w
    

Identifying Which Host Is Providing NFS File Service

Run the nfsstat command with the -m option to gather current NFS information. The name of the current server is printed after “currserver=”.


% nfsstat -m
/usr/local from bee,wasp:/export/share/local
 Flags: vers=3,proto=tcp,sec=sys,hard,intr,llock,link,symlink,
        acl,rsize=32768,wsize=32768,retrans=5
 Failover: noresponse=0, failover=0, remap=0, currserver=bee

How to Verify Options Used With the mount Command

In the Solaris 2.6 release and in any versions of the mount command that were patched after the 2.6 release, no warning is issued for invalid options. The following procedure helps determine whether the options that were supplied either on the command line or through /etc/vfstab were valid.

For this example, assume that the following command has been run:


# mount -F nfs -o ro,vers=2 bee:/export/share/local /mnt
  1. Verify the options by running the following command.


    % nfsstat -m
    /mnt from bee:/export/share/local
    Flags:  vers=2,proto=tcp,sec=sys,hard,intr,dynamic,acl,rsize=8192,wsize=8192,
            retrans=5

    The file system from bee has been mounted with the protocol version set to 2. Unfortunately, the nfsstat command does not display information about all of the options. However, using the nfsstat command is the most accurate way to verify the options.

  2. Check the entry in /etc/mnttab.

    The mount command does not allow invalid options to be added to the mount table. Therefore, verify that the options that are listed in the file match those options that are listed on the command line. In this way, you can check those options that are not reported by the nfsstat command.


    # grep bee /etc/mnttab
    bee:/export/share/local /mnt nfs	ro,vers=2,dev=2b0005e 859934818

Troubleshooting Autofs

Occasionally, you might encounter problems with autofs. This section should improve the problem-solving process. The section is divided into two subsections.

This section presents a list of the error messages that autofs generates. The list is divided into two parts:

Each error message is followed by a description and probable cause of the message.

When troubleshooting, start the autofs programs with the verbose (-v) option. Otherwise, you might experience problems without knowing the cause.

The following paragraphs are labeled with the error message you are likely to see if autofs fails, and a description of the possible problem.

Error Messages Generated by automount -v


bad key key in direct map mapname

Description:

While scanning a direct map, autofs has found an entry key without a prefixed /.

Solution:

Keys in direct maps must be full path names.


bad key key in indirect map mapname

Description:

While scanning an indirect map, autofs has found an entry key that contains a /.

Solution:

Indirect map keys must be simple names, not path names.


can't mount server:pathname: reason

Description:

The mount daemon on the server refuses to provide a file handle for server:pathname.

Solution:

Check the export table on the server.


couldn't create mount point mountpoint: reason

Description:

Autofs was unable to create a mount point that was required for a mount. This problem most frequently occurs when you attempt to hierarchically mount all of a server's exported file systems.

Solution:

A required mount point can exist only in a file system that cannot be mounted, which means that the file system is not exported. Alternatively, the mount point cannot be created because the exported parent file system is shared read-only.


leading space in map entry entry text in mapname

Description:

Autofs has discovered an entry in an automount map that contains leading spaces. This problem is usually an indication of an improperly continued map entry. For example:


fake
/blat   		frobz:/usr/frotz 
Solution:

In this example, the warning is generated when autofs encounters the second line because the first line should be terminated with a backslash (\).


mapname: Not found

Description:

The required map cannot be located. This message is produced only when the -v option is used.

Solution:

Check the spelling and path name of the map name.


remount server:pathname on mountpoint: server not responding

Description:

Autofs has failed to remount a file system that it previously unmounted.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


WARNING: mountpoint already mounted on

Description:

Autofs is attempting to mount over an existing mount point. This message means that an internal error occurred in autofs (an anomaly).

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.

Miscellaneous Error Messages


dir mountpoint must start with '/'

Solution:

The automounter mount point must be given as a full path name. Check the spelling and path name of the mount point.


hierarchical mountpoint: pathname1 and pathname2

Solution:

Autofs does not allow its mount points to have a hierarchical relationship. An autofs mount point must not be contained within another automounted file system.


host server not responding

Description:

Autofs attempted to contact server, but received no response.

Solution:

Check the NFS server status.


hostname: exports: rpc-err

Description:

An error occurred while getting the export list from hostname. This message indicates a server or network problem.

Solution:

Check the NFS server status.


map mapname, key key: bad

Description:

The map entry is malformed, and autofs cannot interpret the entry.

Solution:

Recheck the entry. Perhaps the entry has characters that need to be escaped.


mapname: nis-err

Description:

An error occurred when looking up an entry in a NIS map. This message can indicate NIS problems.

Solution:

Check the NIS server status.


mount of server:pathname on mountpoint: reason

Description:

Autofs failed to do a mount. This occurrence can indicate a server or network problem. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


mountpoint: Not a directory

Description:

Autofs cannot mount itself on mountpoint because it is not a directory.

Solution:

Check the spelling and path name of the mount point.


nfscast: cannot send packet: reason

Description:

Autofs cannot send a query packet to a server in a list of replicated file system locations. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


nfscast: cannot receive reply: reason

Description:

Autofs cannot receive replies from any of the servers in a list of replicated file system locations. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


nfscast: select: reason

Description:

All these error messages indicate problems in attempting to check servers for a replicated file system. This message can indicate a network problem. The reason string defines the problem.

Solution:

Contact Sun for assistance. This error message is extremely rare and has no straightforward solution.


pathconf: no info for server:pathname

Description:

Autofs failed to get pathconf information for the path name.

Solution:

See the fpathconf(2) man page.


pathconf: server: server not responding

Description:

Autofs is unable to contact the mount daemon on server that provides the information to pathconf().

Solution:

Avoid using the POSIX mount option with this server.

Other Errors With Autofs

If the /etc/auto* files have the execute bit set, the automounter tries to execute the maps, which creates messages such as the following:

/etc/auto_home: +auto_home: not found

In this situation, the auto_home file has incorrect permissions. Each entry in the file generates an error message that is similar to this message. The permissions on the file should be reset by typing the following command:


# chmod 644 /etc/auto_home

NFS Error Messages

This section lists NFS error messages. Each error message is followed by a description of the conditions that can create the error and at least one remedy.


Bad argument specified with index option - must be a file

Solution:

You must include a file name with the index option. You cannot use directory names.


Cannot establish NFS service over /dev/tcp: transport setup problem

Description:

This message is often created when the services information in the namespace has not been updated. The message can also be reported for UDP.

Solution:

To fix this problem, you must update the services data in the namespace. For NIS+, the entries should be as follows:


nfsd nfsd tcp 2049 NFS server daemon
nfsd nfsd udp 2049 NFS server daemon

For NIS and /etc/services, the entries should be as follows:


nfsd    2049/tcp    nfs    # NFS server daemon
nfsd    2049/udp    nfs    # NFS server daemon

Cannot use index option without public option

Solution:

Include the public option with the index option in the share command. You must define the public file handle in order for the index option to work.
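For example, a share command that sets both options might look like the following. The shared path and the index file name are only illustrative.


# share -F nfs -o ro,public,index=index.html /export/web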


Note –

The Solaris 2.5.1 release required that the public file handle be set by using the share command. A change in the Solaris 2.6 release sets the public file handle to be root (/) by default. This error message is no longer relevant.



Could not start daemon: error

Description:

This message is displayed if the daemon terminates abnormally or if a system call error occurs. The error string defines the problem.

Solution:

Contact Sun for assistance. This error message is rare and has no straightforward solution.


Could not use public filehandle in request to server

Description:

This message is displayed if the public option is specified but the NFS server does not support the public file handle. In this situation, the mount fails.

Solution:

To remedy this situation, either try the mount request without using the public file handle or reconfigure the NFS server to support the public file handle.


daemon running already with pid pid

Description:

The daemon is already running.

Solution:

If you want to run a new copy, kill the current version and start a new version.


error locking lock file

Description:

This message is displayed when the lock file that is associated with a daemon cannot be locked properly.

Solution:

Contact Sun for assistance. This error message is rare and has no straightforward solution.


error checking lock file: error

Description:

This message is displayed when the lock file that is associated with a daemon cannot be opened properly.

Solution:

Contact Sun for assistance. This error message is rare and has no straightforward solution.


NOTICE: NFS3: failing over from host1 to host2

Description:

This message is displayed on the console when a failover occurs. The message is advisory only.

Solution:

No action required.


filename: File too large

Description:

An NFS version 2 client is trying to access a file that is over 2 Gbytes.

Solution:

Avoid using NFS version 2. Mount the file system with version 3 or version 4. Also, see the description of the nolargefiles option in mount Options for NFS File Systems.
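For example, the following command (using the example server bee) mounts the file system with version 3 explicitly:


# mount -F nfs -o vers=3 bee:/export/share/local /mnt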


mount: ... server not responding:RPC_PMAP_FAILURE - RPC_TIMED_OUT

Description:

The server that is sharing the file system you are trying to mount is down or unreachable, at the wrong run level, or its rpcbind is dead or hung.

Solution:

Wait for the server to reboot. If the server is hung, reboot the server.


mount: ... server not responding: RPC_PROG_NOT_REGISTERED

Description:

The mount request registered with rpcbind, but the NFS mount daemon mountd is not registered.

Solution:

Wait for the server to reboot. If the server is hung, reboot the server.


mount: ... No such file or directory

Description:

Either the remote directory or the local directory does not exist.

Solution:

Check the spelling of the directory names. Run ls on both directories.


mount: ...: Permission denied

Description:

Your computer name might not be in the list of clients or netgroup that is allowed access to the file system you tried to mount.

Solution:

Use showmount -e to verify the access list.


NFS file temporarily unavailable on the server, retrying ...

Description:

An NFS version 4 server can delegate the management of a file to a client. This message indicates that the server is recalling a delegation for another client that conflicts with a request from your client.

Solution:

The recall must occur before the server can process your client's request. For more information about delegation, refer to Delegation in NFS Version 4.


NFS fsstat failed for server hostname: RPC: Authentication error

Description:

This error can be caused by many situations. One of the most difficult situations to debug is when this problem occurs because a user is in too many groups. Currently, a user can be in no more than 16 groups if the user is accessing files through NFS mounts.

Solution:

An alternative does exist for users who need to be in more than 16 groups. You can use access control lists to provide the needed access privileges if you run at minimum the Solaris 2.5 release on the NFS server and the NFS clients.


nfs mount: ignoring invalid option “-option”

Description:

The -option flag is not valid.

Solution:

Refer to the mount_nfs(1M) man page to verify the required syntax.


Note –

This error message is not displayed when running any version of the mount command that is included in a Solaris release from 2.6 to the current release or in earlier versions that have been patched.



nfs mount: NFS can't support “nolargefiles”

Description:

An NFS client has attempted to mount a file system from an NFS server by using the -nolargefiles option.

Solution:

This option is not supported for NFS file system types.


nfs mount: NFS V2 can't support “largefiles”

Description:

The NFS version 2 protocol cannot handle large files.

Solution:

You must use version 3 or version 4 if access to large files is required.


NFS server hostname not responding still trying

Description:

If programs hang while doing file-related work, your NFS server might have failed. This message indicates that NFS server hostname is down or that a problem has occurred with the server or the network.

Solution:

If failover is being used, hostname is a list of servers. Start troubleshooting with How to Check Connectivity on an NFS Client.


NFS server recovering

Description:

During part of the NFS version 4 server reboot, some operations were not permitted. This message indicates that the client is waiting for the server to permit this operation to proceed.

Solution:

No action required. Wait for the server to permit the operation.


Permission denied

Description:

This message is displayed by the ls -l, getfacl, and setfacl commands for the following reasons:

  • If the user or group that exists in an access control list (ACL) entry on an NFS version 4 server cannot be mapped to a valid user or group on an NFS version 4 client, the user is not allowed to read the ACL on the client.

  • If the user or group that exists in an ACL entry that is being set on an NFS version 4 client cannot be mapped to a valid user or group on an NFS version 4 server, the user is not allowed to write or modify an ACL on the client.

  • If an NFS version 4 client and server have mismatched NFSMAPID_DOMAIN values, ID mapping fails.

For more information, see ACLs and nfsmapid in NFS Version 4.

Solution:

Do the following:

  • Make sure that all user and group IDs in the ACL entries exist on both the client and server.

  • Make sure that the value for NFSMAPID_DOMAIN is set correctly in the /etc/default/nfs file. For more information, see Keywords for the /etc/default/nfs File.

To determine if any user or group cannot be mapped on the server or client, use the script that is provided in Checking for Unmapped User or Group IDs.
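As a quick check, you can display the current keyword setting on both the client and the server. The domain value shown here is only illustrative.


% grep NFSMAPID_DOMAIN /etc/default/nfs
NFSMAPID_DOMAIN=example.com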


port number in nfs URL not the same as port number in port option

Description:

The port number that is included in the NFS URL must match the port number that is included with the -port option to mount. If the port numbers do not match, the mount fails.

Solution:

Either change the command to make the port numbers identical or do not specify the port number that is incorrect. Usually, you do not need to specify the port number with both the NFS URL and the -port option.


replicas must have the same version

Description:

For NFS failover to function properly, the NFS servers that are replicas must support the same version of the NFS protocol.

Solution:

Running multiple versions is not allowed.


replicated mounts must be read-only

Description:

NFS failover does not work on file systems that are mounted read-write. Mounting the file system read-write increases the likelihood that a file could change.

Solution:

NFS failover depends on the file systems being identical.


replicated mounts must not be soft

Description:

Replicated mounts require that you wait for a timeout before failover occurs.

Solution:

The soft option requires that the mount fail immediately when a timeout starts, so you cannot include the -soft option with a replicated mount.


share_nfs: Cannot share more than one filesystem with 'public' option

Solution:

Check that the /etc/dfs/dfstab file has only one file system selected to be shared with the -public option. Only one public file handle can be established per server, so only one file system per server can be shared with this option.
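For example, you can list the share entries that use the option. The entry shown is illustrative, and only one such entry should appear.


# grep public /etc/dfs/dfstab
share -F nfs -o ro,public /export/ftp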


WARNING: No network locking on hostname:path: contact admin to install server change

Description:

An NFS client has unsuccessfully attempted to establish a connection with the network lock manager on an NFS server. Rather than failing the mount, this warning is generated to indicate that locking does not work.

Solution:

Upgrade the server with a new version of the OS that provides complete lock manager support.

Chapter 6 Accessing Network File Systems (Reference)

This chapter describes the NFS commands, as well as the different parts of the NFS environment and how these parts work together.


Note –

If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Virtualization Using the Solaris Operating System for more information.


NFS Files

You need several files to support NFS activities on any computer. Many of these files are ASCII, but some of the files are data files. Table 6–1 lists these files and their functions.

Table 6–1 NFS Files

File Name 

Function 

/etc/default/autofs

Lists configuration information for the autofs environment. 

/etc/default/fs

Lists the default file-system type for local file systems. 

/etc/default/nfs

Lists configuration information for lockd and nfsd. For more information, refer to Keywords for the /etc/default/nfs File and the nfs(4) man page.

/etc/default/nfslogd

Lists configuration information for the NFS server logging daemon, nfslogd.

/etc/dfs/dfstab

Lists the local resources to be shared. 

/etc/dfs/fstypes

Lists the default file-system types for remote file systems. 

/etc/dfs/sharetab

Lists the local and remote resources that are shared. See the sharetab(4) man page. Do not edit this file.

/etc/mnttab

Lists file systems that are currently mounted, including automounted directories. See the mnttab(4) man page. Do not edit this file.

/etc/netconfig

Lists the transport protocols. Do not edit this file.

/etc/nfs/nfslog.conf

Lists general configuration information for NFS server logging. 

/etc/nfs/nfslogtab

Lists information for log postprocessing by nfslogd. Do not edit this file.

/etc/nfssec.conf

Lists NFS security services. 

/etc/rmtab

Lists file systems that are remotely mounted by NFS clients. See the rmtab(4) man page. Do not edit this file.

/etc/vfstab

Defines file systems to be mounted locally. See the vfstab(4) man page.

The first entry in /etc/dfs/fstypes is often used as the default file-system type for remote file systems. This entry defines the NFS file-system type as the default.

Only one entry is in /etc/default/fs: the default file-system type for local disks. You can determine the file-system types that are supported on a client or server by checking the files in /kernel/fs.
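For example, on many systems the file contains a single entry, which can be displayed directly. The output shown is illustrative.


% cat /etc/default/fs
LOCAL=ufs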

/etc/default/autofs File

Starting in the Solaris 10 release, you can use the /etc/default/autofs file to configure your autofs environment. Specifically, this file provides an additional way to configure your autofs commands and autofs daemons. The same specifications you would make on the command line can be made in this configuration file. However, unlike the specifications you would make on the command line, this file preserves your specifications, even during upgrades to your operating system. Additionally, you are no longer required to update critical startup files to ensure that the existing behavior of your autofs environment is preserved. You can make your specifications by providing values for the following keywords:

AUTOMOUNT_TIMEOUT

Sets the duration for a file system to remain idle before the file system is unmounted. This keyword is the equivalent of the -t argument for the automount command. The default value is 600.

AUTOMOUNT_VERBOSE

Provides notification of autofs mounts, unmounts, and other nonessential events. This keyword is the equivalent of the -v argument for automount. The default value is FALSE.

AUTOMOUNTD_VERBOSE

Logs status messages to the console and is the equivalent of the -v argument for the automountd daemon. The default value is FALSE.

AUTOMOUNTD_NOBROWSE

Turns browsing on or off for all autofs mount points and is the equivalent of the -n argument for automountd. The default value is FALSE.

AUTOMOUNTD_TRACE

Expands each remote procedure call (RPC) and displays the expanded RPC on standard output. This keyword is the equivalent of the -T argument for automountd. The default value is 0. Values can range from 0 to 5.

AUTOMOUNTD_ENV

Permits you to assign different values to different environments. This keyword is the equivalent of the -D argument for automountd. The AUTOMOUNTD_ENV keyword can be used multiple times. However, you must use separate lines for each environment assignment.
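For example, a minimal sketch of an /etc/default/autofs file that raises the idle timeout to 30 minutes and turns on verbose automount messages might contain the following entries. The values are illustrative.


AUTOMOUNT_TIMEOUT=1800
AUTOMOUNT_VERBOSE=TRUE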

For more information, refer to the man pages for automount(1M) and automountd(1M). For procedural information, refer to How to Use the /etc/default/autofs File.

Keywords for the /etc/default/nfs File

In NFS version 4, the following keywords can be set in the /etc/default/nfs file. These keywords control the NFS protocols that are used by both the client and server.

NFS_SERVER_VERSMIN

Sets the minimum version of the NFS protocol to be registered and offered by the server. Starting in the Solaris 10 release, the default is 2. Other valid values include 3 or 4. Refer to Setting Up NFS Services.

NFS_SERVER_VERSMAX

Sets the maximum version of the NFS protocol to be registered and offered by the server. Starting in the Solaris 10 release, the default is 4. Other valid values include 2 or 3. Refer to Setting Up NFS Services.

NFS_CLIENT_VERSMIN

Sets the minimum version of the NFS protocol to be used by the NFS client. Starting in the Solaris 10 release, the default is 2. Other valid values include 3 or 4. Refer to Setting Up NFS Services.

NFS_CLIENT_VERSMAX

Sets the maximum version of the NFS protocol to be used by the NFS client. Starting in the Solaris 10 release, the default is 4. Other valid values include 2 or 3. Refer to Setting Up NFS Services.

NFS_SERVER_DELEGATION

Controls whether the NFS version 4 delegation feature is enabled for the server. If this feature is enabled, the server attempts to provide delegations to the NFS version 4 client. By default, server delegation is enabled. To disable server delegation, see How to Select Different Versions of NFS on a Server. For more information, refer to Delegation in NFS Version 4.

NFSMAPID_DOMAIN

Sets a common domain for clients and servers. Overrides the default behavior of using the local DNS domain name. For task information, refer to Setting Up NFS Services. Also, see nfsmapid Daemon.
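For example, a site that wants to disable NFS version 2 on the server and to fix the NFS version 4 domain might add entries such as the following to /etc/default/nfs. The domain shown is illustrative.


NFS_SERVER_VERSMIN=3
NFS_SERVER_VERSMAX=4
NFSMAPID_DOMAIN=example.com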

/etc/default/nfslogd File

This file defines some of the parameters that are used by NFS server logging. The following parameters can be defined.

CYCLE_FREQUENCY

Determines the number of hours that must pass before the log files are cycled. The default value is 24 hours. This option is used to prevent the log files from growing too large.

IDLE_TIME

Sets the number of seconds nfslogd should sleep before checking for more information in the buffer file. This parameter also determines how often the configuration file is checked. This parameter, along with MIN_PROCESSING_SIZE, determines how often the buffer file is processed. The default value is 300 seconds. Increasing this number can improve performance by reducing the number of checks.

MAPPING_UPDATE_INTERVAL

Specifies the number of seconds between updates of the records in the file-handle-to-path mapping tables. The default value is 86400 seconds or one day. This parameter helps keep the file-handle-to-path mapping tables up-to-date without having to continually update the tables.

MAX_LOGS_PRESERVE

Determines the number of log files to be saved. The default value is 10.

MIN_PROCESSING_SIZE

Sets the minimum number of bytes that the buffer file must reach before processing and writing to the log file. This parameter, along with IDLE_TIME, determines how often the buffer file is processed. The default value is 524288 bytes. Increasing this number can improve performance by reducing the number of times the buffer file is processed.

PRUNE_TIMEOUT

Selects the number of hours that must pass before a file-handle-to-path mapping record times out and can be reduced. The default value is 168 hours or 7 days.

UMASK

Specifies the file mode creation mask for the log files that are created by nfslogd. The default value is 0137.
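For example, to cycle the log files twice a day and keep more old logs, you could set values such as the following in /etc/default/nfslogd. The values are illustrative and differ from the defaults.


CYCLE_FREQUENCY=12
MAX_LOGS_PRESERVE=20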

/etc/nfs/nfslog.conf File

This file defines the path, file names, and type of logging to be used by nfslogd. Each definition is associated with a tag. Starting NFS server logging requires that you identify the tag for each file system. The global tag defines the default values. You can use the following parameters with each tag as needed.

defaultdir=path

Specifies the default directory path for the logging files. Unless you specify differently, the default directory is /var/nfs.

log=path/filename

Sets the path and file name for the log files. The default is /var/nfs/nfslog.

fhtable=path/filename

Selects the path and file name for the file-handle-to-path database files. The default is /var/nfs/fhtable.

buffer=path/filename

Determines the path and file name for the buffer files. The default is /var/nfs/nfslog_workbuffer.

logformat=basic|extended

Selects the format to be used when creating user-readable log files. The basic format produces a log file that is similar to the output of some ftpd daemons. The extended format gives a more detailed view.

If the path is not specified, the path that is defined by defaultdir is used. Also, you can override defaultdir by using an absolute path.

To identify the files more easily, place the files in separate directories. Here is an example of the changes that are needed.


% cat /etc/nfs/nfslog.conf
#ident  "@(#)nfslog.conf        1.5     99/02/21 SMI"
#
  .
  .
# NFS server log configuration file.
#

global  defaultdir=/var/nfs \
        log=nfslog fhtable=fhtable buffer=nfslog_workbuffer

publicftp log=logs/nfslog fhtable=fh/fhtables buffer=buffers/workbuffer

In this example, any file system that is shared with log=publicftp uses the following values:

For procedural information, refer to How to Enable NFS Server Logging.

NFS Daemons

To support NFS activities, several daemons are started when a system goes into run level 3 or multiuser mode. The mountd and nfsd daemons are run on systems that are servers. The automatic startup of the server daemons depends on the existence of entries that are labeled with the NFS file-system type in /etc/dfs/sharetab. To support NFS file locking, the lockd and statd daemons are run on NFS clients and servers. However, unlike previous versions of NFS, in NFS version 4, the daemons lockd, statd, mountd, and nfslogd are not used.

This section describes the following daemons.

automountd Daemon

This daemon handles the mounting and unmounting requests from the autofs service. The syntax of the command is as follows:

automountd [ -Tnv ] [ -D name=value ]

The command behaves in the following ways:

The default value for the automount map is /etc/auto_master. Use the -T option for troubleshooting.

lockd Daemon

This daemon supports record-locking operations on NFS files. The lockd daemon manages RPC connections between the client and the server for the Network Lock Manager (NLM) protocol. The daemon is normally started without any options. You can use three options with this command. See the lockd(1M) man page. These options can either be used from the command line or by editing the appropriate string in /etc/default/nfs. The following are descriptions of keywords that can be set in the /etc/default/nfs file.


Note –

Starting in the Solaris 10 release, the LOCKD_GRACE_PERIOD keyword and the -g option have been deprecated. The deprecated keyword is replaced with the new keyword GRACE_PERIOD. If both keywords are set, the value for GRACE_PERIOD overrides the value for LOCKD_GRACE_PERIOD. See the description of GRACE_PERIOD that follows.


Like LOCKD_GRACE_PERIOD, GRACE_PERIOD=graceperiod in /etc/default/nfs sets the number of seconds after a server reboot that the clients have to reclaim both NFS version 3 locks, provided by NLM, and version 4 locks. Thus, the value for GRACE_PERIOD controls the length of the grace period for lock recovery, for both NFS version 3 and NFS version 4.

The LOCKD_RETRANSMIT_TIMEOUT=timeout parameter in /etc/default/nfs selects the number of seconds to wait before retransmitting a lock request to the remote server. This option affects the NFS client-side service. The default value for timeout is 15 seconds. Decreasing the timeout value can improve response time for NFS clients on a “noisy” network. However, this change can cause additional server load by increasing the frequency of lock requests. The same parameter can be used from the command line by starting the daemon with the -t timeout option.

The LOCKD_SERVERS=nthreads parameter in /etc/default/nfs specifies the maximum number of concurrent threads that the server handles per connection. Base the value for nthreads on the load that is expected on the NFS server. The default value is 20. Each NFS client that uses TCP uses a single connection with the NFS server. Therefore, each client can use a maximum of 20 concurrent threads on the server.

All NFS clients that use UDP share a single connection with the NFS server. Under these conditions, you might have to increase the number of threads that are available for the UDP connection. A minimum calculation would be to allow two threads for each UDP client. However, this number is specific to the workload on the client, so two threads per client might not be sufficient. The disadvantage to using more threads is that when the threads are used, more memory is used on the NFS server. If the threads are never used, however, increasing nthreads has no effect. The same parameter can be used from the command line by starting the daemon with the nthreads option.
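For example, a server with many UDP clients might raise the thread limit, shorten the lock retransmit timeout, and lengthen the grace period by adding entries such as the following to /etc/default/nfs. The values are illustrative.


LOCKD_SERVERS=40
LOCKD_RETRANSMIT_TIMEOUT=10
GRACE_PERIOD=120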

mountd Daemon

This daemon handles file-system mount requests from remote systems and provides access control. The mountd daemon checks /etc/dfs/sharetab to determine which file systems are available for remote mounting and which systems are allowed to do the remote mounting. You can use the -v option and the -r option with this command. See the mountd(1M) man page.

The -v option runs the command in verbose mode. Every time an NFS server determines the access that a client should be granted, a message is printed on the console. The information that is generated can be useful when trying to determine why a client cannot access a file system.

The -r option rejects all future mount requests from clients. This option does not affect clients that already have a file system mounted.


Note –

NFS version 4 does not use this daemon.


nfs4cbd Daemon

nfs4cbd, which is for the exclusive use of the NFS version 4 client, manages the communication endpoints for the NFS version 4 callback program. The daemon has no user-accessible interface. For more information, see the nfs4cbd(1M) man page.

nfsd Daemon

This daemon handles other client file-system requests. You can use several options with this command. See the nfsd(1M) man page for a complete listing. These options can either be used from the command line or by editing the appropriate string in /etc/default/nfs.

The NFSD_LISTEN_BACKLOG=length parameter in /etc/default/nfs sets the length of the connection queue over connection-oriented transports for NFS and TCP. The default value is 32 entries. The same selection can be made from the command line by starting nfsd with the -l option.

The NFSD_MAX_CONNECTIONS=#-conn parameter in /etc/default/nfs selects the maximum number of connections per connection-oriented transport. The default value for #-conn is unlimited. The same parameter can be used from the command line by starting the daemon with the -c #-conn option.

The NFSD_SERVERS=nservers parameter in /etc/default/nfs selects the maximum number of concurrent requests that a server can handle. The default value for nservers is 16. The same selection can be made from the command line by starting nfsd with the nservers option.

Unlike older versions of this daemon, nfsd does not spawn multiple copies to handle concurrent requests. Checking the process table with ps only shows one copy of the daemon running.

nfslogd Daemon

This daemon provides operational logging. NFS operations that are logged against a server are based on the configuration options that are defined in /etc/default/nfslogd. When NFS server logging is enabled, records of all RPC operations on a selected file system are written to a buffer file by the kernel. Then nfslogd postprocesses these requests. The name service switch is used to help map UIDs to logins and IP addresses to host names. The number is recorded if no match can be found through the identified name services.

Mapping of file handles to path names is also handled by nfslogd. The daemon tracks these mappings in a file-handle-to-path mapping table. One mapping table exists for each tag that is identified in /etc/nfs/nfslog.conf. After postprocessing, the records are written to ASCII log files.


Note –

NFS version 4 does not use this daemon.


nfsmapid Daemon

Version 4 of the NFS protocol (RFC3530) changed the way user or group identifiers (UID or GID) are exchanged between the client and server. The protocol requires that a file's owner and group attributes be exchanged between an NFS version 4 client and an NFS version 4 server as strings in the form of user@nfsv4_domain or group@nfsv4_domain, respectively.

For example, user known_user has a UID 123456 on an NFS version 4 client whose fully qualified hostname is system.example.com. For the client to make requests to the NFS version 4 server, the client must map the UID 123456 to known_user@example.com and then send this attribute to the NFS version 4 server. The NFS version 4 server expects to receive user and group file attributes in the user_or_group@nfsv4_domain format. After the server receives known_user@example.com from the client, the server maps the string to the local UID 123456, which is understood by the underlying file system. This functionality assumes that every UID and GID in the network is unique and that the NFS version 4 domains on the client match the NFS version 4 domains on the server.


Note –

If the server does not recognize the given user or group name, even if the NFS version 4 domains match, the server is unable to map the user or group name to its unique ID, an integer value. Under such circumstances, the server maps the inbound user or group name to the nobody user. To prevent such occurrences, administrators should avoid making special accounts that only exist on the NFS version 4 client.


The NFS version 4 client and server are both capable of performing integer-to-string and string-to-integer conversions. For example, in response to a GETATTR operation, the NFS version 4 server maps UIDs and GIDs obtained from the underlying file system into their respective string representation and sends this information to the client. Similarly, the client must map UIDs and GIDs into string representations. For example, in response to the chown command, the client maps the new UID or GID to a string representation before sending a SETATTR operation to the server.

Note, however, that the client and server respond differently to unrecognized strings:

Configuration Files and nfsmapid

The following describes how the nfsmapid daemon uses the /etc/nsswitch.conf and /etc/resolv.conf files:

Precedence Rules

    For nfsmapid to work properly, NFS version 4 clients and servers must have the same domain. To ensure matching NFS version 4 domains, nfsmapid follows these strict precedence rules:

  1. The daemon first checks the /etc/default/nfs file for a value that has been assigned to the NFSMAPID_DOMAIN keyword. If a value is found, the assigned value takes precedence over any other settings. The assigned value is appended to the outbound attribute strings and is compared against inbound attribute strings. For more information about keywords in the /etc/default/nfs file, see Keywords for the /etc/default/nfs File. For procedural information, see Setting Up NFS Services.


    Note –

    The use of the NFSMAPID_DOMAIN setting is not scalable and is not recommended for large deployments.


  2. If no value has been assigned to NFSMAPID_DOMAIN, then the daemon checks for a domain name from a DNS TXT RR. nfsmapid relies on directives in the /etc/resolv.conf file that are used by the set of routines in the resolver. The resolver searches through the configured DNS servers for the _nfsv4idmapdomain TXT RR. Note that the use of DNS TXT records is more scalable. For this reason, continued use of TXT records is much preferred over setting the keyword in the /etc/default/nfs file.

  3. If no DNS TXT record provides a domain name, then by default the nfsmapid daemon uses the configured DNS domain.

  4. If the /etc/resolv.conf file does not exist, nfsmapid obtains the NFS version 4 domain name by following the behavior of the domainname command. Specifically, if the /etc/defaultdomain file exists, nfsmapid uses the contents of that file for the NFS version 4 domain. If the /etc/defaultdomain file does not exist, nfsmapid uses the domain name that is provided by the network's configured naming service. For more information, see the domainname(1M) man page.

nfsmapid and DNS TXT Records

The ubiquitous nature of DNS provides an efficient storage and distribution mechanism for the NFS version 4 domain name. Additionally, because of the inherent scalability of DNS, the use of DNS TXT resource records is the preferred method for configuring the NFS version 4 domain name for large deployments. You should configure the _nfsv4idmapdomain TXT record on enterprise-level DNS servers. Such configurations ensure that any NFS version 4 client or server can find its NFS version 4 domain by traversing the DNS tree.

The following is an example of a preferred entry for enabling the DNS server to provide the NFS version 4 domain name:


_nfsv4idmapdomain		IN		TXT			"foo.bar"

In this example, the domain name to configure is the value that is enclosed in double-quotes. Note that no ttl field is specified and that no domain is appended to _nfsv4idmapdomain, which is the value in the owner field. This configuration enables the TXT record to use the zone's ${ORIGIN} entry from the Start-Of-Authority (SOA) record. For example, at different levels of the domain namespace, the record could read as follows:


_nfsv4idmapdomain.subnet.yourcorp.com.    IN    TXT    "foo.bar"
_nfsv4idmapdomain.yourcorp.com.           IN    TXT    "foo.bar"

This configuration provides DNS clients with the added flexibility of using the resolv.conf file to search up the DNS tree hierarchy. See the resolv.conf(4) man page. This capability provides a higher probability of finding the TXT record. For even more flexibility, lower level DNS sub-domains can define their own DNS TXT resource records (RRs). This capability enables lower level DNS sub-domains to override the TXT record that is defined by the top level DNS domain.
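Assuming that standard DNS client tools such as nslookup are available, you can verify from a client that the TXT record resolves. The domain and value shown here are only illustrative and follow the earlier example, and the output is abbreviated.


% nslookup -query=TXT _nfsv4idmapdomain.yourcorp.com
_nfsv4idmapdomain.yourcorp.com  text = "foo.bar"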


Note –

The domain that is specified by the TXT record can be an arbitrary string that does not necessarily match the DNS domain for clients and servers that use NFS version 4. You have the option of not sharing NFS version 4 data with other DNS domains.


Checking for the NFS Version 4 Domain

Before assigning a value for your network's NFS version 4 domain, check to see if an NFS version 4 domain has already been configured for your network. The following examples provide ways of identifying your network's NFS version 4 domain.

For more information, see the following man pages:

Configuring the NFS Version 4 Default Domain

This section describes how the network obtains the desired default domain:

Configuring an NFS Version 4 Default Domain in the Solaris Express 5/06 Release

In the initial Solaris 10 release, the domain was defined during the first system reboot after installing the OS. In the Solaris Express 5/06 release, the NFS version 4 domain is defined during the installation of the OS. To provide this functionality, the following features have been added:

    The following describes how the functionality operates:

  1. The sysidnfs4 program checks the /etc/.sysIDtool.state file to determine whether an NFS version 4 domain has been identified.

    • If the .sysIDtool.state file shows that an NFS version 4 domain has been configured for the network, the sysidnfs4 program makes no further checks. See the following example of a .sysIDtool.state file:


      1       # System previously configured?
      1       # Bootparams succeeded?
      1       # System is on a network?
      1       # Extended network information gathered?
      1       # Autobinder succeeded?
      1       # Network has subnets?
      1       # root password prompted for?
      1       # locale and term prompted for?
      1       # security policy in place
      1       # NFSv4 domain configured
      xterms

      The 1 that appears before # NFSv4 domain configured confirms that the NFS version 4 domain has been configured.

    • If the .sysIDtool.state file shows that no NFS version 4 domain has been configured for the network, the sysidnfs4 program must make further checks. See the following example of a .sysIDtool.state file:


      1       # System previously configured?
      1       # Bootparams succeeded?
      1       # System is on a network?
      1       # Extended network information gathered?
      1       # Autobinder succeeded?
      1       # Network has subnets?
      1       # root password prompted for?
      1       # locale and term prompted for?
      1       # security policy in place
      0       # NFSv4 domain configured
      xterms

      The 0 that appears before # NFSv4 domain configured confirms that no NFS version 4 domain has been configured.

  2. If no NFS version 4 domain has been identified, the sysidnfs4 program checks the nfs4_domain keyword in the sysidcfg file.

    • If a value for nfs4_domain exists, that value is assigned to the NFSMAPID_DOMAIN keyword in the /etc/default/nfs file. Note that any value assigned to NFSMAPID_DOMAIN overrides the dynamic domain selection capability of the nfsmapid daemon. For more information about the dynamic domain selection capability of nfsmapid, see Precedence Rules.

    • If no value for nfs4_domain exists, the sysidnfs4 program identifies the domain that nfsmapid derives from the operating system's configured name services. This derived value is presented as a default domain at an interactive prompt that gives you the option of accepting the default value or assigning a different NFS version 4 domain.

This functionality makes the following obsolete:


Note –

Because of the inherent ubiquitous and scalable nature of DNS, the use of DNS TXT records for configuring the domain of large NFS version 4 deployments continues to be preferred and strongly encouraged. See nfsmapid and DNS TXT Records.


For specific information about the Solaris installation process, see the following:

Configuring an NFS Version 4 Default Domain in the Solaris 10 Release

In the initial Solaris 10 release of NFS version 4, if your network includes multiple DNS domains, but only has a single UID and GID namespace, all clients must use one value for NFSMAPID_DOMAIN. For sites that use DNS, nfsmapid resolves this issue by obtaining the domain name from the value that you assigned to _nfsv4idmapdomain. For more information, see nfsmapid and DNS TXT Records. If your network is not configured to use DNS, during the first system boot the Solaris OS uses the sysidconfig(1M) utility to provide the following prompts for an NFS version 4 domain name:


This system is configured with NFS version 4, which uses a 
domain name that is automatically derived from the system's 
name services. The derived domain name is sufficient for most 
configurations. In a few cases, mounts that cross different 
domains might cause files to be owned by nobody due to the 
lack of a common domain name.

Do you need to override the system's default NFS version 4 domain 
name (yes/no)? [no]

The default response is [no]. If you choose [no], you see the following:


For more information about how the NFS version 4 default domain name is 
derived and its impact, refer to the man pages for nfsmapid(1M) and 
nfs(4), and the System Administration Guide: Network Services.

If you choose [yes], you see this prompt:


Enter the domain to be used as the NFS version 4 domain name.
NFS version 4 domain name []:

Note –

If a value for NFSMAPID_DOMAIN exists in /etc/default/nfs, the [domain_name] that you provide overrides that value.


Additional Information About nfsmapid

For more information about nfsmapid, see the following:

statd Daemon

This daemon works with lockd to provide crash and recovery functions for the lock manager. The statd daemon tracks the clients that hold locks on an NFS server. If a server crashes, after the reboot the statd on the server contacts the statd on each client. The client statd can then attempt to reclaim any locks on the server. The client statd also informs the server statd when a client has crashed so that the client's locks on the server can be cleared. You have no options to select with this daemon. For more information, see the statd(1M) man page.

In the Solaris 7 release, the way that statd tracks the clients has been improved. In all earlier Solaris releases, statd created files in /var/statmon/sm for each client by using the client's unqualified host name. This file naming caused problems if you had two clients in different domains that shared a host name, or if clients were not resident in the same domain as the NFS server. Because the unqualified host name only lists the host name, without any domain or IP-address information, the older version of statd had no way to differentiate between these types of clients. To fix this problem, the Solaris 7 statd creates a symbolic link in /var/statmon/sm to the unqualified host name by using the IP address of the client. The new link resembles the following:


# ls -l /var/statmon/sm
lrwxrwxrwx   1 daemon          11 Apr 29 16:32 ipv4.192.168.255.255 -> myhost
lrwxrwxrwx   1 daemon          11 Apr 29 16:32 ipv6.fec0::56:a00:20ff:feb9:2734 -> v6host
--w-------   1 daemon          11 Apr 29 16:32 myhost
--w-------   1 daemon          11 Apr 29 16:32 v6host

In this example, the client host name is myhost and the client's IP address is 192.168.255.255. If another host with the name myhost were mounting a file system, two symbolic links would lead to the host name.


Note –

NFS version 4 does not use this daemon.


NFS Commands

These commands must be run as root to be fully effective, but requests for information can be made by all users:

automount Command

This command installs autofs mount points and associates the information in the auto_master files with each mount point. The syntax of the command is as follows:

automount [ -t duration ] [ -v ]

-t duration sets the time, in seconds, that a file system is to remain mounted, and -v selects the verbose mode. Running this command in the verbose mode allows for easier troubleshooting.

If not specifically set, the value for duration is set to 5 minutes. In most circumstances, this value is good. However, on systems that have many automounted file systems, you might need to increase the duration value. In particular, if a server has many users active, checking the automounted file systems every 5 minutes can be inefficient. Checking the autofs file systems every 1800 seconds, which is 30 minutes, could be more efficient. If the file systems are not unmounted every 5 minutes, /etc/mnttab can become large. To reduce the output when df checks each entry in /etc/mnttab, you can filter the output from df by using the -F option (see the df(1M) man page) or by using egrep.
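For example, to apply the 30-minute duration that is mentioned above and to enable verbose mode, you could run the following command as superuser:


# automount -t 1800 -v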

You should consider that adjusting the duration also changes how quickly changes to the automounter maps are reflected. Changes cannot be seen until the file system is unmounted. Refer to Modifying the Maps for instructions on how to modify automounter maps.

clear_locks Command

This command enables you to remove all file, record, and share locks for an NFS client. You must be root to run this command. From an NFS server, you can clear the locks for a specific client. From an NFS client, you can clear locks for that client on a specific server. The following example would clear the locks for the NFS client that is named tulip on the current system.


# clear_locks tulip

Using the -s option enables you to specify which NFS host to clear the locks from. You must run this command from the NFS client that created the locks. In this situation, the locks from the client would be removed from the NFS server that is named bee.


# clear_locks -s bee

Caution –

This command should only be run when a client crashes and cannot clear its locks. To avoid data corruption problems, do not clear locks for an active client.


fsstat Command

Starting in the Solaris 10 11/06 release, the fsstat utility enables you to monitor file system operations by file system type and by mount point. Various options allow you to customize the output. See the following examples.

This example shows output for NFS version 3, version 4, and the root mount point.


% fsstat nfs3 nfs4 /
  new     name   name    attr    attr   lookup   rddir   read   read   write   write
 file    remov   chng     get     set      ops     ops    ops  bytes     ops   bytes
3.81K       90  3.65K   5.89M   11.9K    35.5M   26.6K   109K   118M   35.0K   8.16G  nfs3
  759      503    457   93.6K   1.44K     454K   8.82K  65.4K   827M     292    223K  nfs4
25.2K    18.1K  1.12K   54.7M    1017     259M   1.76M  22.4M  20.1G   1.43M   3.77G  /

This example uses the -i option to provide statistics about the I/O operations for NFS version 3, version 4, and the root mount point.


% fsstat -i nfs3 nfs4 /
 read    read    write   write   rddir   rddir   rwlock   rwulock
  ops   bytes      ops   bytes     ops   bytes      ops       ops
 109K    118M    35.0K   8.16G   26.6K   4.45M     170K      170K  nfs3
65.4K    827M      292    223K   8.82K   2.62M    74.1K     74.1K  nfs4
22.4M   20.1G    1.43M   3.77G   1.76M   3.29G    25.5M     25.5M  /

This example uses the -n option to provide statistics about the naming operations for NFS version 3, version 4, and the root mount point.


% fsstat -n nfs3 nfs4 /
lookup   creat   remov  link   renam  mkdir  rmdir   rddir  symlnk  rdlnk
 35.5M   3.79K      90     2   3.64K      5      0   26.6K      11   136K  nfs3
  454K     403     503     0     101      0      0   8.82K     356  1.20K  nfs4
  259M   25.2K   18.1K   114    1017     10      2   1.76M      12  8.23M  /

For more information, see the fsstat(1M) man page.

mount Command

With this command, you can attach a named file system, either local or remote, to a specified mount point. For more information, see the mount(1M) man page. Used without arguments, mount displays a list of file systems that are currently mounted on your computer.

Many types of file systems are included in the standard Solaris installation. Each file-system type has a specific man page that lists the options to mount that are appropriate for that file-system type. The man page for NFS file systems is mount_nfs(1M). For UFS file systems, see mount_ufs(1M).

The Solaris 7 release includes the ability to select a path name to mount from an NFS server by using an NFS URL instead of the standard server:/pathname syntax. See How to Mount an NFS File System Using an NFS URL for further information.


Caution –

The version of the mount command that is included in any Solaris release from 2.6 to the current release does not warn about invalid options. The command silently ignores any options that cannot be interpreted. Ensure that you verify all of the options that were used so that you can prevent unexpected behavior.


mount Options for NFS File Systems

The subsequent text lists some of the options that can follow the -o flag when you are mounting an NFS file system. For a complete list of options, refer to the mount_nfs(1M) man page.

bg|fg

These options can be used to select the retry behavior if a mount fails. The bg option causes the mount attempts to be run in the background. The fg option causes the mount attempt to be run in the foreground. The default is fg, which is the best selection for file systems that must be available. This option prevents further processing until the mount is complete. bg is a good selection for noncritical file systems because the client can do other processing while waiting for the mount request to be completed.

forcedirectio

This option improves performance of large sequential data transfers. Data is copied directly to a user buffer. No caching is performed in the kernel on the client. This option is off by default.

Previously, all write requests were serialized by both the NFS client and the NFS server. The NFS client has been modified to permit an application to issue concurrent writes, as well as concurrent reads and writes, to a single file. You can enable this functionality on the client by using the forcedirectio mount option. When you use this option, you are enabling this functionality for all files within the mounted file system. You could also enable this functionality on a single file on the client by using the directio() interface. Unless this functionality has been enabled, writes to files are serialized. Also, if concurrent writes or concurrent reads and writes are occurring, then POSIX semantics are no longer being supported for that file.
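
For instance, a file system could be mounted with this option as follows. The server name bee and the paths in this sketch are placeholders.


# mount -F nfs -o forcedirectio bee:/export/data /mnt/data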

For an example of how to use this option, refer to Using the mount Command.

largefiles

With this option, you can access files that are larger than 2 Gbytes on a server that is running the Solaris 2.6 release. Whether a large file can be accessed can only be controlled on the server, so this option is silently ignored on NFS version 3 mounts. Starting with release 2.6, by default, all UFS file systems are mounted with largefiles. For mounts that use the NFS version 2 protocol, the largefiles option causes the mount to fail with an error.

nolargefiles

This option for UFS mounts guarantees that no large files can exist on the file system. See the mount_ufs(1M) man page. Because the existence of large files can only be controlled on the NFS server, no option for nolargefiles exists when using NFS mounts. Attempts to NFS-mount a file system by using this option are rejected with an error.

nosuid|suid

Starting in the Solaris 10 release, the nosuid option is the equivalent of specifying the nodevices option with the nosetuid option. When the nodevices option is specified, the opening of device-special files on the mounted file system is disallowed. When the nosetuid option is specified, the setuid bit and setgid bit in binary files that are located in the file system are ignored. The processes run with the privileges of the user who executes the binary file.

The suid option is the equivalent of specifying the devices option with the setuid option. When the devices option is specified, the opening of device-special files on the mounted file system is allowed. When the setuid option is specified, the setuid bit and the setgid bit in binary files that are located in the file system are honored by the kernel.

If neither option is specified, the default option is suid, which provides the default behavior of specifying the devices option with the setuid option.

The following table describes the effect of combining nosuid or suid with devices or nodevices, and setuid or nosetuid. Note that in each combination of options, the most restrictive option determines the behavior.

Behavior From the Combined Options           Option    Option      Option

The equivalent of nosetuid with nodevices    nosuid    nosetuid    nodevices
The equivalent of nosetuid with nodevices    nosuid    nosetuid    devices
The equivalent of nosetuid with nodevices    nosuid    setuid      nodevices
The equivalent of nosetuid with nodevices    nosuid    setuid      devices
The equivalent of nosetuid with nodevices    suid      nosetuid    nodevices
The equivalent of nosetuid with devices      suid      nosetuid    devices
The equivalent of setuid with nodevices      suid      setuid      nodevices
The equivalent of setuid with devices        suid      setuid      devices

The nosuid option provides additional security for NFS clients that access potentially untrusted servers. The mounting of remote file systems with this option reduces the chance of privilege escalation through importing untrusted devices or importing untrusted setuid binary files. All these options are available in all Solaris file systems.
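
For example, a file system from a server that is not fully trusted might be mounted as follows. The server name and paths are placeholders.


# mount -F nfs -o nosuid bee:/export/tools /mnt/tools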

public

This option forces the use of the public file handle when contacting the NFS server. If the public file handle is supported by the server, the mounting operation is faster because the MOUNT protocol is not used. Also, because the MOUNT protocol is not used, the public option allows mounting to occur through a firewall.
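
For example, the following illustrative command uses the public file handle when mounting a file system. The server name and paths are placeholders.


# mount -F nfs -o public bee:/export/web /mnt/web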

rw|ro

The -rw and -ro options indicate whether a file system is to be mounted read-write or read-only. The default is read-write, which is the appropriate option for remote home directories, mail-spooling directories, or other file systems that need to be changed by users. The read-only option is appropriate for directories that should not be changed by users. For example, shared copies of the man pages should not be writable by users.

sec=mode

You can use this option to specify the authentication mechanism to be used during the mount transaction. The value for mode can be one of the following.

  • Use krb5 for Kerberos version 5 authentication service.

  • Use krb5i for Kerberos version 5 with integrity.

  • Use krb5p for Kerberos version 5 with privacy.

  • Use none for no authentication.

  • Use dh for Diffie-Hellman (DH) authentication.

  • Use sys for standard UNIX authentication.

The modes are also defined in /etc/nfssec.conf.
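
For example, the following illustrative command mounts a file system with Kerberos version 5 authentication. The server name and paths are placeholders.


# mount -F nfs -o sec=krb5 bee:/export/secure /mnt/secure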

soft|hard

An NFS file system that is mounted with the soft option returns an error if the server does not respond. The hard option causes the mount to continue to retry until the server responds. The default is hard, which should be used for most file systems. Applications frequently do not check return values from soft-mounted file systems, which can make the application fail or can lead to corrupted files. If the application does check the return values, routing problems and other conditions can still confuse the application or lead to file corruption if the soft option is used. In most situations, the soft option should not be used. If a file system is mounted by using the hard option and becomes unavailable, an application that uses this file system hangs until the file system becomes available.

Using the mount Command

Refer to the following examples.
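
For instance, the following illustrative command mounts a remote file system read-only. The server name and paths are placeholders.


# mount -F nfs -o ro bee:/export/share/man /usr/man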

umount Command

This command enables you to remove a remote file system that is currently mounted. The umount command supports the -V option to allow for testing. You might also use the -a option to unmount several file systems at one time. If mount-points are included with the -a option, those file systems are unmounted. If no mount points are included, an attempt is made to unmount all file systems that are listed in /etc/mnttab except for the “required” file systems, such as /, /usr, /var, /proc, /dev/fd, and /tmp. Because the file system is already mounted and should have an entry in /etc/mnttab, you do not need to include a flag for the file-system type.

The -f option forces a busy file system to be unmounted. You can use this option to unhang a client that is hung while trying to mount an unmountable file system.
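
For example, the following illustrative command forces an unmount of a hung mount point. The mount point is a placeholder.


# umount -f /mnt/data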


Caution –

By forcing an unmount of a file system, you can cause data loss if files are being written to.


See the following examples.


Example 6–1 Unmounting a File System

This example unmounts a file system that is mounted on /usr/man:


# umount /usr/man


Example 6–2 Using Options with umount

This example displays the results of running umount -a -V:


# umount -a -V
umount /home/kathys
umount /opt
umount /home
umount /net

Notice that this command does not actually unmount the file systems.


mountall Command

Use this command to mount all file systems or a specific group of file systems that are listed in a file-system table. The command provides a way of doing the following:

Because all file systems that are labeled as NFS file-system type are remote file systems, some of these options are redundant. For more information, see the mountall(1M) man page.

Note that the following two examples of user input are equivalent:


# mountall -F nfs

# mountall -F nfs -r

umountall Command

Use this command to unmount a group of file systems. The -k option runs the fuser -k mount-point command to kill any processes that are associated with the mount-point. The -s option indicates that unmount is not to be performed in parallel. -l specifies that only local file systems are to be used, and -r specifies that only remote file systems are to be used. The -h host option indicates that all file systems from the named host should be unmounted. You cannot combine the -h option with -l or -r.

The following is an example of unmounting all file systems that are mounted from remote hosts:


# umountall -r

The following is an example of unmounting all file systems that are currently mounted from the server bee:


# umountall -h bee

sharemgr Command

The Solaris Express, Developer Edition 2/07 release includes the sharemgr utility, which is an administrative tool that provides an enhanced method of sharing files and performing related tasks. Previously, such tasks were accomplished by adding entries to the /etc/dfs/dfstab file and using the share command to create a temporary share. You made the share permanent by rebooting the system or using the shareall command. Related administrative tasks necessitated that you manually edit configuration files. The sharemgr utility simplifies this process by introducing two concepts:

The sharemgr utility accomplishes different tasks by using subcommands. Options and their related properties can be used with each subcommand. The utility uses the following syntax:


# sharemgr [subcommand] [option] [share_group]

Note –

The sharemgr utility provides a unique way of checking the validity of a desired configuration. The -n option allows you to test the validity of the options and properties you want to use with a specific subcommand. The test does not change your configuration. For example, if you use the -n option with the subcommand create, no share group is created.
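
For instance, the following illustrative command only checks whether my_group could be created with the given property. No share group is created.


# sharemgr create -n -P nfs -p rw=true my_group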


The following table describes the subcommands that are supported by the sharemgr utility.

Table 6–2 Subcommands Supported by sharemgr

Subcommand 

Description 

create

Makes (or creates) a new share group 

delete

Removes a share group 

list

Lists the current share groups 

show

Lists the shares by share group 

set

Sets a share group's properties, including the group's security property 

unset

Removes (or unsets) properties from a share group 

add-share

Adds a new share to a share group 

move-share

Moves a share from one share group to another 

remove-share

Removes a share from a share group 

set-share

Updates the properties associated with a share 

disable

Unshares one or more share groups 

enable

Shares one or more share groups 

start

Used by the smf utility to share one or more share groups

stop

Used by the smf utility to unshare one or more share groups

-h

Provides online-help descriptions of sharemgr and its subcommands and options, and shows the syntax to use

The following table describes the properties supported by the sharemgr utility.

Table 6–3 Properties Supported by sharemgr Utility

Property 

Value 

Description 

aclok

boolean 

Enable access control lists (ACL) for NFS version 2. 

anon

UID 

Specify the User ID for unknown users. 

index

file path 

Include the specified file in a content list for a directory. 

log

tag 

Specify a tag for NFS server logging. Note that these tags are defined in the /etc/nfs/nfslog.conf file.

nosub

boolean 

Disallow clients from mounting subdirectories of shares. 

nosuid

boolean 

Disallow the use of the setuid() and setgid() functions.

public

boolean 

Move the location of a public file handle from root to an exported directory for WebNFS-enabled browsers and clients. Only one file system (or share) on each server can use this property. This property is not accepted by a share group. For more information, see the share_nfs(1M) man page.

ro

access-list, boolean, or an asterisk (*)

If the property is set to an access-list, permissions for the list are set to read-only. For information about access lists, see the share_nfs(1M) man page.

If you set the property to no value or true, permissions for the share group are set to read-only.

If you set the property to an asterisk (*), permissions for all hosts are set to read-only.


Note –

To use this property, you must use the -S option to set the security mode. For more information about setting a security mode, see set Subcommand.


root

access-list or an asterisk (*)

If the property is set to an access-list, permissions for the list are set to root access. For information about access-lists, see the share_nfs(1M) man page.

If you set the property to an asterisk (*), all hosts have root access.


Note –

To use this property, you must use the -S option to set the security mode. For more information about setting a security mode, see set Subcommand.


rw

access-list, boolean, or an asterisk (*)

If the property is set to an access-list, permissions for the list are set to read-write. For information about access-lists, see the share_nfs(1M) man page.

If you set the property to no value or true, permissions for the share group are set to read-write.

If you set the property to an asterisk (*), permissions for all hosts are set to read-write.


Note –

To use this property, you must use the -S option to set the security mode. For more information about setting a security mode, see set Subcommand.


window

integer 

For the dh security mode, set the number of seconds a credential is available.


Note –

To use this property, you must use the -S option to set the security mode. For more information about setting a security mode, see set Subcommand.



Note –

sharemgr and sharectl are the preferred utilities for managing your file systems and file-sharing protocols.


For procedures that use the sharemgr utility, see the following:

Also, see the sharemgr(1M) man page.

The sharectl utility is an administrative tool that enables you to configure and manage file-sharing protocols, such as NFS. For more information, see the following:

The sections that follow describe each subcommand for sharemgr and provide examples.

create Subcommand

The create subcommand makes (or creates) a share group. After you create a share group, use the add-share subcommand to add shares to the group. Note the following:

This subcommand supports the following options:

-n

Checks the validity of a desired configuration.

-P

Specifies a file-system type. The default is NFS.

-p

Specifies a property for the new share group.

-h

Provides an online-help description.

The create subcommand uses the following syntax:


# sharemgr create [-h] [-n] [-P protocol] [-p property=value] share_group

The following example creates my_group as an NFS share group with read-write access and with the nosuid property enabled:


example# sharemgr create -P nfs -p rw=true -p nosuid=true my_group

delete Subcommand

This subcommand removes a share group. Before using this option, use the remove-share subcommand to delete all shares from the group. Alternately, use the -f option with this subcommand to force the removal of a group that might still contain shares. The -f option unshares and removes all shares from the share group, so the share group can be removed.

To remove a protocol from a share group, use the -P option. Note that when using the -P option, the share group is not removed. Only the protocol is removed from the group.

This subcommand supports the following options:

-n

Checks the validity of command-line string

-f

Forces a share group to be removed

-P

Specifies a file-system type to be removed from the group

-h

Provides an online-help description

This subcommand uses the following syntax:


# sharemgr delete [-h] [-n] [-f] [-P protocol] share_group
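
The following illustrative command forces the removal of my_group, even if the group still contains shares:


example# sharemgr delete -f my_group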

list Subcommand

This subcommand provides a list of current share groups. You can customize the output by using various options with this subcommand.

This subcommand supports the following options:

-P

Enables you to see a list of groups that use a specific file-system type.

-v

Is the verbose option and provides the following:

  • Group name

  • Status of the group, specifically whether the group is enabled or disabled

  • File-system type used by the group

-h

Provides an online-help description.

This subcommand uses the following syntax:


# sharemgr list [-h] [-P protocol] [-v]

The following example shows the output for the -v option:


example# sharemgr list -v
group01   enabled    nfs
group02   disabled   nfs

show Subcommand

This subcommand provides a list of shares by group. By specifying one or more share groups, you can limit the output to a list of shares in the specified groups. If no groups are specified, the list shows the shares in each group.

This subcommand supports the following options:

-p

Shows you the properties assigned to each group.

-v

Is the verbose option. If included, the verbose option provides the resource name and descriptions of each share.

-x

Creates an XML file for the output. Because this option automatically includes the information you would get from the -p and -v options, no other options are needed when you use this option.

-h

Provides an online-help description.

This subcommand uses the following syntax:


# sharemgr show [-h] [-v] [-p] [-x] [share_group...]

The following example uses the -p option to show the shares and group properties for my_group:


example01# sharemgr show -p my_group
my_group	nfs=(rw=true nosuid=true)
	/export/home/home0
	/export/home/home1

The next example uses the -v option to show the shares in my_group and their descriptions:


example02# sharemgr show -v my_group
my_group
	HOME0=/export/home/home0	"Home directory set 0"
	HOME1=/export/home/home1	"Home directory set 1"

set Subcommand

This subcommand sets properties for a share group. Note the following conditions:

A group can be associated with more than one protocol and can have different properties for each protocol.

This subcommand supports the following options:

-n

Checks the validity of the command-line string.

-P

Specifies a file-system type.

-p

Specifies a property for the share group.

-S

Specifies the security mode, such as sys, dh, or krb5. For more information about security modes, see the nfssec(5) man page.

-s

Specifies the path to the share, which is a file or a directory.

-h

Provides an online-help description.

This subcommand uses the following syntax:


# sharemgr set [-h] [-n] [-P protocol] [-s share-path] [-S security-mode] [-p property=value] share_group

The following example sets the user ID for unknown users in my_group to 123456:


example01# sharemgr set -p anon=123456 my_group

In the next example, my_group is first created with the nosuid property enabled. The set subcommand is then used to change the nosuid property to false:


example02# sharemgr create -P nfs -p rw=true -p nosuid=true my_group	
example02# sharemgr set -P nfs -p nosuid=false my_group

unset Subcommand

This subcommand removes (or unsets) properties from a share group.

This subcommand supports the following options:

-n

Checks the validity of the command-line string

-P

Specifies the protocol associated with the properties being removed from the group

-p

Specifies the property to be removed from the share group

-s

Specifies the path to the share, which is a file or a directory.

-h

Provides an online-help description

This subcommand uses the following syntax:


# sharemgr unset [-h] [-n] -P protocol [-s share-path] [-p property] share_group
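
The following illustrative command removes the nosuid property from the NFS protocol settings for my_group:


example# sharemgr unset -P nfs -p nosuid my_group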

add-share Subcommand

After creating a share group, use this subcommand to add shares to the group. A share is a path to a file or a directory. Note that a share can exist in one group only. If you try to add a share to another group, you will get an error message.

This subcommand supports the following options:

-n

Checks the validity of the command-line string.

-s

Specifies the path to the share, which is a file or a directory.

-t

Specifies that the share is transient. Transient shares are automatically removed from the group when you reboot the system or use the disable or stop subcommands.

-d

Adds descriptive text about the share.

-r

Assigns the share a resource name that identifies the share. Note that the resource name can contain only alphanumeric characters, hyphens (-), and underscores (_). The first character in the name must be alphabetic.

-h

Provides an online-help description.

This subcommand uses the following syntax:


# sharemgr add-share [-h] [-n] -s share-path [-t] [-d description] [-r resource-name] share_group

The following example adds the shares /export/home/home0 and /export/home/home1 to my_group.


example# sharemgr add-share -s /export/home/home0 my_group
example# sharemgr add-share -s /export/home/home1 my_group

move-share Subcommand

Use this subcommand to move a share from one group to another.

This subcommand supports the following options:

-n

Checks the validity of the command-line string

-s

Specifies the path to the share, which is a file or a directory

-h

Provides an online-help description

This subcommand uses the following syntax:


# sharemgr move-share [-h] [-n] -s share-path share_group

The following example shows a share that was added to my_group and then moved to your_group.


example# sharemgr add-share  -s /export/home/home0 my_group
example# sharemgr move-share -s /export/home/home0 your_group

remove-share Subcommand

Use this subcommand to remove a share from a share group.

This subcommand supports the following options:

-n

Checks the validity of the command-line string

-s

Specifies the path to the share, which is a file or a directory

-h

Provides an online-help description

This subcommand uses the following syntax:


# sharemgr remove-share [-h] [-n] -s share-path share_group

The following example removes the share /export/home/home0 from my_group.


example# sharemgr remove-share -s /export/home/home0 my_group

set-share Subcommand

Use this subcommand to change the properties associated with a share. Currently, you can use this subcommand to change the descriptive text associated with a specific share.

This subcommand supports the following options:

-n

Checks the validity of the command-line string.

-s

Specifies the path to the share, which is a file or a directory.

-d

Adds descriptive text about the share.

-r

Assigns the share a resource name that identifies the share. Note that the resource name can contain only alphanumeric characters, hyphens (-), and underscores (_). The first character in the name must be alphabetic.

-h

Provides an online-help description.

This subcommand uses the following syntax:


# sharemgr set-share [-h] [-n] -s share-path [-d description] [-r resource-name] share_group

The following example shows how a description is changed.


example# sharemgr add-share  -s /export/home/home0 -d "original text" my_group
example# sharemgr set-share  -s /export/home/home0 -d "new text" my_group

enable Subcommand

Use this subcommand to share (or enable) the shares in the groups that you specify. Note that the groups you create are enabled by default. You must use this subcommand to enable a group that has previously been disabled with the disable subcommand.


Note –

If you specify a protocol, only the groups that are associated with that protocol are enabled.


This subcommand supports the following options:

-a

Specifies all groups

-n

Checks the validity of the command-line string

-P

Specifies a file-system type

-h

Provides an online-help description

This subcommand uses the following syntax:


# sharemgr enable [-h] [-n] [-P protocol] [share_group | -a]

The following example shares (or enables) the shares in all groups that use NFS.


example01# sharemgr enable -P NFS -a

In this next example, all shares in my_group are shared (or enabled).


example02# sharemgr enable my_group

disable Subcommand

Use this subcommand to unshare (or disable) the shares in the groups that you specify. This subcommand can be reversed by using the enable subcommand.


Note –

If you specify a protocol, only the groups that are associated with that protocol are disabled.


This subcommand supports the following options:

-a

Specifies all groups

-n

Checks the validity of the command-line string

-P

Specifies a file-system type

-h

Provides an online-help description

This subcommand uses the following syntax:


# sharemgr disable [-h] [-P protocol] [share_group | -a]

The following example unshares (or disables) the shares in all groups that use NFS.


example01# sharemgr disable -P NFS -a

In this next example, all shares in my_group are unshared (or disabled).


example02# sharemgr disable my_group

start Subcommand

This subcommand is similar to the enable subcommand with these distinctions:

This subcommand supports the following options:

-a

Specifies all groups

-h

Provides an online-help description

Table 6–4 Two Ways to Start a Share Group

Using the start Subcommand

The start subcommand uses the following syntax:


# sharemgr start [-h] [share-group | -a]

The following example enables the shares in all groups to start sharing. 


# sharemgr start -a

Using the svcadm Command

The svcadm command uses the following syntax:


# svcadm start network/shares/group:share-group

The following example enables the shares in my-group to start sharing.


# svcadm start network/shares/group:my-group

stop Subcommand

This subcommand is similar to the disable subcommand with these distinctions:

This subcommand supports the following options:

-a

Specifies all groups

-h

Provides an online-help description

Table 6–5 Two Ways to Stop a Share Group

Using the stop Subcommand

The stop subcommand uses the following syntax:


# sharemgr stop [-h] [share-group | -a]

The following example unshares the shares in all groups. 


# sharemgr stop -a

Using the svcadm Command

The svcadm command uses the following syntax:


# svcadm stop network/shares/group:share-group

The following example unshares the shares in my-group.


# svcadm stop network/shares/group:my-group

-h Feature

The sharemgr utility has an online help feature that describes sharemgr and its subcommands and options, and shows proper syntax. For online help, use -h.

This feature uses the following syntax:


# sharemgr [subcommand] -h

The following example uses -h to provide a complete description of the sharemgr utility.


example01# sharemgr -h
USAGE: # sharemgr [subcommand] [option] [share_group]

DESCRIPTION: Configures and manages file sharing

SUBCOMMANDS:
create        Makes (or creates) a new share group
delete        Removes a share group
list          Lists the current share groups
show          Lists the shares by share group
set           Sets a share group's properties
unset         Removes (or unsets) properties from a share group
add-share     Adds a new share to a share group
move-share    Moves a share from one share group to another
remove-share  Removes a share from a share group
set-share     Updates the properties associated with a share
disable       Unshares one or more share groups
enable        Shares one or more share groups
start         Used by the smf utility to share one or more share groups
stop          Used by the smf utility to unshare one or more share groups
-h            Online-help feature

SEE ALSO:
sharemgr(1M) man page
System Administration Guide: Network Services

This next example uses -h to provide information about the set subcommand.


example02# sharemgr set -h
USAGE: # sharemgr set [-h] [-n] [-P protocol] [-S security-mode] [-p property=value] share_group

DESCRIPTION: Sets a share group's properties

OPTIONS:
-h  Online-help feature
-n  Checks the validity of the command-line string
-P  Specifies a file-system type.
-p  Specifies a property for the share group.
-S  Specifies the security mode, such as sys, dh, or krb5

PROPERTIES:
aclok={true|false}        Enable access control lists (ACL) for NFS version 2.
anon=UID                  Specify the User ID for unknown users.
index=file path           Include the specified file in a content list for a directory.
log=tag                   Specify a tag to use for log messages.
nosub={true|false}        Disallow clients from mounting subdirectories of shares.
nosuid={true|false}       Disallow the use of the setuid() and setgid() functions.
ro={access-list|true|*}   If ro is set to an access-list, permissions for the list
                          are read-only. If ro is set to no value or to true, 
                          permissions for the share group are read-only. If ro
                          is set to an asterisk (*), permissions for all hosts
                          are set to read-only.
root={access-list|*}      If root  is set to an access-list, permissions for the
                          list are set to allow root access. If root is set to
                          an asterisk (*), all hosts have root access.
rw={access-list|true|*}   If rw is set to an access list, permissions for the list
                          are set to read-write. If rw is set to no value or to 
                          true, permissions for the share group are set to 
                          read-write. If rw is set to an asterisk (*), permissions
                          for all hosts are set to read-write.
window=integer            For the dh security mode, set the number of seconds a
                          credential is available.

SEE ALSO:
sharemgr(1M) man page
System Administration Guide: Network Services
nfssec(5) man page for more information about security modes

For help, you can also refer to the sharemgr(1M) man page.

sharectl Command

The Solaris Express, Developer Edition 2/07 release includes the sharectl utility, which is an administrative tool that enables you to configure and manage file-sharing protocols, such as NFS. You can use this command to do the following:

The sharectl utility uses the following syntax:


# sharectl subcommand [option] [protocol]

The sharectl utility supports the following subcommands:

Table 6–6 Subcommands for sharectl Utility

Subcommand 

Description 

set

Defines the properties for a file-sharing protocol. For a list of properties and property values, see the parameters described in the nfs(4) man page.

get

Displays the properties and property values for the specified protocol. 

status

Displays whether the specified protocol is enabled or disabled. If no protocol is specified, the status of all file-sharing protocols is displayed. 


Note –

sharemgr and sharectl are the preferred utilities for managing your file systems and file-sharing protocols.


For more information about the sharectl utility, see the following:

For information about the sharemgr utility, see the following:

set Subcommand

The set subcommand, which defines the properties for a file-sharing protocol, supports the following options:

-h

Provides an online-help description

-p

Defines a property for the protocol

The set subcommand uses the following syntax:


# sharectl set [-h] [-p property=value] protocol

Note –

The following:


The following example sets the minimum version of the NFS protocol for the client to 3:


# sharectl set -p nfs_client_versmin=3 nfs

get Subcommand

The get subcommand, which displays the properties and property values for the specified protocol, supports the following options:

-h

Provides an online-help description.

-p

Identifies the property value for the specified property. If the -p option is not used, all property values are displayed.

The get subcommand uses the following syntax:


# sharectl get [-h] [-p property] protocol

Note –

You must have root privileges to use the get subcommand.


The following example uses nfsd_servers, which is the property that enables you to specify the maximum number of concurrent NFS requests:


# sharectl get -p nfsd_servers nfs
nfsd_servers=16

In the following example, because the -p option is not used, all property values are displayed:


# sharectl get nfs
listen_backlog=32
protocol=ALL
servers=32
lockd_listen_backlog=32
lockd_servers=20
lockd_retransmit_timeout=5
grace_period=90
nfsmapid_domain=company.com
server_versmin=2
server_versmax=4
client_versmin=2
client_versmax=4
max_connections=-1

status Subcommand

The status subcommand, which displays whether the specified protocol is enabled or disabled, supports the following option:

-h

Provides an online-help description

The status subcommand uses the following syntax:


# sharectl status [-h] [protocol]

The following example shows the status of the NFS protocol:


# sharectl status nfs
nfs	   enabled

share Command


Note –

Starting with the Solaris Express, Developer Edition 2/07 release, sharemgr and sharectl are the preferred utilities for managing your file systems and file-sharing protocols. See sharemgr Command and sharectl Command


With this command, you can make a local file system on an NFS server available for mounting. You can also use the share command to display a list of the file systems on your system that are currently shared. The NFS server must be running for the share command to work. The NFS server software is started automatically during boot if an entry is in /etc/dfs/dfstab. The command does not report an error if the NFS server software is not running, so you must verify that the software is running.
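
For instance, the following illustrative command shares a directory read-only. The path is a placeholder.


# share -F nfs -o ro /export/share/man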

The objects that can be shared include any directory tree. However, each file system hierarchy is limited by the disk slice or partition that the file system is located on. For instance, sharing the root (/) file system would not also share /usr, unless these directories are on the same disk partition or slice. Normal installation places root on slice 0 and /usr on slice 6. Also, sharing /usr would not share any other local disk partitions that are mounted on subdirectories of /usr.

A file system cannot be shared if that file system is part of a larger file system that is already being shared. For example, if /usr and /usr/local are on one disk slice, /usr can be shared or /usr/local can be shared. However, if both directories need to be shared with different share options, /usr/local must be moved to a separate disk slice.

You can gain access to a file system that is shared read-only through the file handle of a file system that is shared read-write, but only when the two file systems are on the same disk slice. To create a more secure configuration, place the file systems that need to be read-write on a separate partition or disk slice from the file systems that you need to share as read-only.


Note –

For information about how NFS version 4 functions when a file system is unshared and then reshared, refer to Unsharing and Resharing a File System in NFS Version 4.


Non-File-System-Specific share Options

Some of the options that you can include with the -o flag are as follows.

rw|ro

The pathname file system is shared read-write or read-only for all clients.

rw=accesslist

The file system is shared read-write for the clients that are listed only. All other requests are denied. Starting with the Solaris 2.6 release, the list of clients that are defined in accesslist has been expanded. See Setting Access Lists With the share Command for more information. You can use this option to override an -ro option.

NFS-Specific share Options

The options that you can use with NFS file systems include the following.

aclok

This option enables an NFS server that supports the NFS version 2 protocol to be configured to do access control for NFS version 2 clients. Without this option, all clients are given minimal access. With this option, the clients have maximal access. For instance, on file systems that are shared with the -aclok option, if anyone has read permissions, everyone does. However, without this option, you can deny access to a client who should have access permissions. A decision to permit too much access or too little access depends on the security systems already in place. See Using Access Control Lists to Protect Files in System Administration Guide: Security Services for more information about access control lists (ACLs).


Note –

To use ACLs, ensure that clients and servers run software that supports the NFS version 3 and NFS_ACL protocols. If the software only supports the NFS version 3 protocol, clients obtain correct access but cannot manipulate the ACLs. If the software supports the NFS_ACL protocol, the clients obtain correct access and can manipulate the ACLs. Starting with the Solaris 2.5 release, the Solaris system supports both protocols.


anon=uid

You use uid to select the user ID of unauthenticated users. If you set uid to -1, the server denies access to unauthenticated users. You can grant root access by setting anon=0, but this option allows unauthenticated users to have root access, so use the root option instead.
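
For example, the following illustrative command shares a file system read-only and denies access to unauthenticated users by setting uid to -1:


# share -F nfs -o ro,anon=-1 /export/share/man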

index=filename

When a user accesses an NFS URL, the -index=filename option forces the HTML file to load, instead of displaying a list of the directory. This option mimics the action of current browsers if an index.html file is found in the directory that the HTTP URL is accessing. This option is the equivalent of setting the DirectoryIndex option for httpd. For instance, suppose that the dfstab file entry resembles the following:


share -F nfs -o ro,public,index=index.html /export/web

These URLs then display the same information:


nfs://<server>/<dir>
nfs://<server>/<dir>/index.html
nfs://<server>//export/web/<dir>
nfs://<server>//export/web/<dir>/index.html
http://<server>/<dir>
http://<server>/<dir>/index.html

log=tag

This option specifies the tag in /etc/nfs/nfslog.conf that contains the NFS server logging configuration information for a file system. This option must be selected to enable NFS server logging.
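
For example, the following illustrative dfstab entry enables NFS server logging by using a tag named global, which is assumed to be defined in /etc/nfs/nfslog.conf:


share -F nfs -o ro,log=global /export/web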

nosuid

This option signals that all attempts to enable the setuid or setgid mode should be ignored. NFS clients cannot create files with the setuid or setgid bits on.

public

The -public option has been added to the share command to enable WebNFS browsing. Only one file system on a server can be shared with this option.

root=accesslist

The server gives root access to the hosts in the list. By default, the server does not give root access to any remote hosts. If the selected security mode is anything other than -sec=sys, you can only include client host names in the accesslist. Starting with the Solaris 2.6 release, the list of clients that are defined in accesslist is expanded. See Setting Access Lists With the share Command for more information.


Caution –

Granting root access to other hosts has wide security implications. Use the -root= option with extreme caution.


root=client-name

The client-name value is used with AUTH_SYS authentication to check the client's IP address against a list of addresses provided by exportfs(1B). If a match is found, root access is given to the file systems being shared.

root=host-name

For secure NFS modes, such as AUTH_SYS or RPCSEC_GSS, the server checks the clients' principal names against a list of host-based principal names that are derived from an access list. The generic syntax for the client's principal name is root@hostname. For Kerberos V the syntax is root/hostname.fully.qualified@REALM. When you use the host-name value, the clients on the access list must have the credentials for a principal name. For Kerberos V, the client must have a valid keytab entry for its root/hostname.fully.qualified@REALM principal name. For more information, see Configuring Kerberos Clients in System Administration Guide: Security Services.

sec=mode[:mode]

mode selects the security modes that are needed to obtain access to the file system. By default, the security mode is UNIX authentication. You can specify multiple modes, but use each security mode only once per command line. Each -mode option applies to any subsequent -rw, -ro, -rw=, -ro=, -root=, and -window= options until another -mode is encountered. The use of -sec=none maps all users to user nobody.

window=value

value selects the maximum lifetime in seconds of a credential on the NFS server. The default value is 30000 seconds or 8.3 hours.

Setting Access Lists With the share Command

In Solaris releases prior to 2.6, the accesslist that was included with either the -ro=, -rw=, or -root= option of the share command was restricted to a list of host names or netgroup names. Starting with the Solaris 2.6 release, the access list can also include a domain name, a subnet number, or an entry to deny access. These extensions should simplify file access control on a single server without having to change the namespace or maintain long lists of clients.

This command provides read-only access for most systems but allows read-write access for rose and lilac:


# share -F nfs -o ro,rw=rose:lilac /usr/src

In the next example, read-only access is assigned to any host in the eng netgroup. The client rose is specifically given read-write access.


# share -F nfs -o ro=eng,rw=rose /usr/src

Note –

You cannot specify both rw and ro without arguments. If no read-write option is specified, the default is read-write for all clients.


To share one file system with multiple clients, you must type all options on the same line. Multiple invocations of the share command on the same object “remember” only the last command that is run. This command enables read-write access to three client systems, but only rose and tulip are given access to the file system as root.


# share -F nfs -o rw=rose:lilac:tulip,root=rose:tulip /usr/src

When sharing a file system that uses multiple authentication mechanisms, ensure that you include the -ro, -ro=, -rw, -rw=, -root, and -window options after the correct security modes. In this example, UNIX authentication is selected for all hosts in the netgroup that is named eng. These hosts can only mount the file system in read-only mode. The hosts tulip and lilac can mount the file system read-write if these hosts use Diffie-Hellman authentication. With these options, tulip and lilac can mount the file system read-only even if these hosts are not using DH authentication. However, the host names must be listed in the eng netgroup.


# share -F nfs -o sec=dh,rw=tulip:lilac,sec=sys,ro=eng /usr/src

Even though UNIX authentication is the default security mode, UNIX authentication is not included if the -sec option is used. Therefore, you must include a -sec=sys option if UNIX authentication is to be used with any other authentication mechanism.

You can use a DNS domain name in the access list by preceding the actual domain name with a dot. The string that follows the dot is a domain name, not a fully qualified host name. The following entry allows mount access to all hosts in the eng.example.com domain:


# share -F nfs -o ro=.:.eng.example.com /export/share/man

In this example, the single “.” matches all hosts that are matched through the NIS or NIS+ namespaces. The results that are returned from these name services do not include the domain name. The “.eng.example.com” entry matches all hosts that use DNS for namespace resolution. DNS always returns a fully qualified host name. So, the longer entry is required if you use a combination of DNS and the other namespaces.

You can use a subnet number in an access list by preceding the actual network number or the network name with “@”. This character differentiates the network name from a netgroup or a fully qualified host name. You must identify the subnet in either /etc/networks or in an NIS or NIS+ namespace. The following entries have the same effect if the 192.168 subnet has been identified as the eng network:


# share -F nfs -o ro=@eng /export/share/man
# share -F nfs -o ro=@192.168 /export/share/man
# share -F nfs -o ro=@192.168.0.0 /export/share/man

The last two entries show that you do not need to include the full network address.

If the network prefix is not byte aligned, as with Classless Inter-Domain Routing (CIDR), the mask length can be explicitly specified on the command line. The mask length is defined by following either the network name or the network number with a slash and the number of significant bits in the prefix of the address. For example:


# share -f nfs -o ro=@eng/17 /export/share/man
# share -F nfs -o ro=@192.168.0/17 /export/share/man

In these examples, the “/17” indicates that the first 17 bits in the address are to be used as the mask. For additional information about CIDR, look up RFC 1519.

You can also select negative access by placing a “-” before the entry. Note that the entries are read from left to right. Therefore, you must place the negative access entries before the entry that the negative access entries apply to:


# share -F nfs -o ro=-rose:.eng.example.com /export/share/man

This example would allow access to any hosts in the eng.example.com domain except the host that is named rose.

unshare Command

This command allows you to make a previously available file system unavailable for mounting by clients. You can use the unshare command to unshare any file system, whether the file system was shared explicitly with the share command or automatically through /etc/dfs/dfstab. If you use the unshare command to unshare a file system that you shared through the dfstab file, be careful. Remember that the file system is shared again when you exit and reenter run level 3. You must remove the entry for this file system from the dfstab file if you want the change to persist.

When you unshare an NFS file system, access from clients with existing mounts is inhibited. The file system might still be mounted on the client, but the files are not accessible.


Note –

For information about how NFS version 4 functions when a file system is unshared and then reshared, refer to Unsharing and Resharing a File System in NFS Version 4.


The following is an example of unsharing a specific file system:


# unshare /usr/src

shareall Command

This command allows for multiple file systems to be shared. When used with no options, the command shares all entries in /etc/dfs/dfstab. You can include a file name to specify the name of a file that lists share command lines. If you do not include a file name, /etc/dfs/dfstab is checked. If you use a “-” to replace the file name, you can type share commands from standard input.

The following is an example of sharing all file systems that are listed in a local file:


# shareall /etc/dfs/special_dfstab

unshareall Command

This command makes all currently shared resources unavailable. The -F FSType option selects a list of file-system types that are defined in /etc/dfs/fstypes. This flag enables you to choose only certain types of file systems to be unshared. The default file-system type is defined in /etc/dfs/fstypes. To choose specific file systems, use the unshare command.

The following is an example of unsharing all NFS-type file systems:


# unshareall -F nfs

showmount Command

This command displays one of the following:


Note –

The showmount command only shows NFS version 2 and version 3 exports. This command does not show NFS version 4 exports.


The command syntax is as follows:

showmount [ -ade ] [ hostname ]

-a

Prints a list of all the remote mounts. Each entry includes the client name and the directory.

-d

Prints a list of the directories that are remotely mounted by clients.

-e

Prints a list of the file systems that are shared or exported.

hostname

Selects the NFS server to gather the information from.

If hostname is not specified, the local host is queried.

The following command lists all clients and the local directories that the clients have mounted:


# showmount -a bee
lilac:/export/share/man
lilac:/usr/src
rose:/usr/src
tulip:/export/share/man

The following command lists the directories that have been mounted:


# showmount -d bee
/export/share/man
/usr/src

The following command lists file systems that have been shared:


# showmount -e bee
/usr/src								(everyone)
/export/share/man					eng

setmnt Command

This command creates an /etc/mnttab table. The mount and umount commands consult the table. Generally, you do not have to run this command manually, as this command runs automatically when a system is booted.

Commands for Troubleshooting NFS Problems

These commands can be useful when troubleshooting NFS problems.

nfsstat Command

You can use this command to gather statistical information about NFS and RPC connections. The syntax of the command is as follows:

nfsstat [ -cmnrsz ]

-c

Displays client-side information

-m

Displays statistics for each NFS-mounted file system

-n

Specifies that NFS information is to be displayed on both the client side and the server side

-r

Displays RPC statistics

-s

Displays the server-side information

-z

Specifies that the statistics should be set to zero

If no options are supplied on the command line, the -cnrs options are used.

Gathering server-side statistics can be important for debugging problems when new software or new hardware is added to the computing environment. Running this command a minimum of once a week, and storing the numbers, provides a good history of previous performance.

Refer to the following example:


# nfsstat -s

Server rpc:
Connection oriented:
calls      badcalls   nullrecv   badlen     xdrcall    dupchecks  dupreqs    
719949194  0          0          0          0          58478624   33         
Connectionless:
calls      badcalls   nullrecv   badlen     xdrcall    dupchecks  dupreqs    
73753609   0          0          0          0          987278     7254       

Server nfs:
calls                badcalls             
787783794            3516                 
Version 2: (746607 calls)
null       getattr    setattr    root       lookup     readlink   read       
883 0%     60 0%      45 0%      0 0%       177446 23% 1489 0%    537366 71% 
wrcache    write      create     remove     rename     link       symlink    
0 0%       1105 0%    47 0%      59 0%      28 0%      10 0%      9 0%       
mkdir      rmdir      readdir    statfs     
26 0%      0 0%       27926 3%   108 0%     
Version 3: (728863853 calls)
null          getattr       setattr       lookup        access        
1365467 0%    496667075 68% 8864191 1%    66510206 9%   19131659 2%   
readlink      read          write         create        mkdir         
414705 0%     80123469 10%  18740690 2%   4135195 0%    327059 0%     
symlink       mknod         remove        rmdir         rename        
101415 0%     9605 0%       6533288 0%    111810 0%     366267 0%     
link          readdir       readdirplus   fsstat        fsinfo        
2572965 0%    519346 0%     2726631 0%    13320640 1%   60161 0%      
pathconf      commit        
13181 0%      6248828 0%    
Version 4: (54871870 calls)
null                compound            
266963 0%           54604907 99%        
Version 4: (167573814 operations)
reserved            access              close               commit              
0 0%                2663957 1%          2692328 1%          1166001 0%          
create              delegpurge          delegreturn         getattr             
167423 0%           0 0%                1802019 1%          26405254 15%        
getfh               link                lock                lockt               
11534581 6%         113212 0%           207723 0%           265 0%              
locku               lookup              lookupp             nverify             
230430 0%           11059722 6%         423514 0%           21386866 12%        
open                openattr            open_confirm        open_downgrade      
2835459 1%          4138 0%             18959 0%            3106 0%             
putfh               putpubfh            putrootfh           read                
52606920 31%        0 0%                35776 0%            4325432 2%          
readdir             readlink            remove              rename              
606651 0%           38043 0%            560797 0%           248990 0%           
renew               restorefh           savefh              secinfo             
2330092 1%          8711358 5%          11639329 6%         19384 0%            
setattr             setclientid         setclientid_confirm verify              
453126 0%           16349 0%            16356 0%            2484 0%             
write               release_lockowner   illegal             
3247770 1%          0 0%                0 0%                

Server nfs_acl:
Version 2: (694979 calls)
null        getacl      setacl      getattr     access      getxattrdir 
0 0%        42358 6%    0 0%        584553 84%  68068 9%    0 0%        
Version 3: (2465011 calls)
null        getacl      setacl      getxattrdir 
0 0%        1293312 52% 1131 0%     1170568 47% 

The previous listing is an example of NFS server statistics. The first five lines relate to RPC and the remaining lines report NFS activities. In both sets of statistics, knowing the average number of badcalls or calls and the number of calls per week can help identify a problem. The badcalls value reports the number of bad messages from a client. This value can indicate network hardware problems.

Some of the operations generate write activity on the disks. A sudden increase in these statistics could indicate trouble and should be investigated. For NFS version 2 statistics, the operations to note are setattr, write, create, remove, rename, link, symlink, mkdir, and rmdir. For NFS version 3 and version 4 statistics, the value to watch is commit. If the commit level is high on one NFS server, compared to another almost identical server, check that the NFS clients have enough memory. The number of commit operations on the server grows when clients do not have available resources.

pstack Command

This command displays a stack trace for each process. The pstack command must be run by the owner of the process or by root. You can use pstack to determine where a process is hung. The only option that is allowed with this command is the PID of the process that you want to check. See the proc(1) man page.

The following example checks the nfsd process that is running.


# /usr/bin/pgrep nfsd
243
# /usr/bin/pstack 243
243:    /usr/lib/nfs/nfsd -a 16
 ef675c04 poll     (24d50, 2, ffffffff)
 000115dc ???????? (24000, 132c4, 276d8, 1329c, 276d8, 0)
 00011390 main     (3, efffff14, 0, 0, ffffffff, 400) + 3c8
 00010fb0 _start   (0, 0, 0, 0, 0, 0) + 5c

The example shows that the process is waiting for a new connection request, which is a normal response. If the stack shows that the process is still in poll after a request is made, the process might be hung. Follow the instructions in How to Restart NFS Services to fix this problem. Review the instructions in NFS Troubleshooting Procedures to fully verify that your problem is a hung program.

rpcinfo Command

This command generates information about the RPC service that is running on a system. You can also use this command to change the RPC service. Many options are available with this command. See the rpcinfo(1M) man page. The following is a shortened synopsis for some of the options that you can use with the command.

rpcinfo [ -m | -s ] [ hostname ]

rpcinfo -T transport hostname [ progname ]

rpcinfo [ -t | -u ] [ hostname ] [ progname ]

-m

Displays a table of statistics of the rpcbind operations

-s

Displays a concise list of all registered RPC programs

-T

Displays information about services that use specific transports or protocols

-t

Probes the RPC programs that use TCP

-u

Probes the RPC programs that use UDP

transport

Selects the transport or protocol for the services

hostname

Selects the host name of the server that you need information from

progname

Selects the RPC program to gather information about

If no value is given for hostname, the local host name is used. You can substitute the RPC program number for progname, but many users can remember the name and not the number. You can use the -p option in place of the -s option on those systems that do not run the NFS version 3 software.

The data that is generated by this command can include the RPC program number, the version numbers for a specific program, the transport protocols that are in use, the name of the RPC service, and the owner of the RPC service.

The following example gathers information about the RPC services that are running on a server. The text that is generated by the command is filtered by the sort command to make the output more readable. Several lines that list RPC services have been deleted from the example.


% rpcinfo -s bee |sort -n
   program version(s) netid(s)                         service     owner
    100000  2,3,4     udp6,tcp6,udp,tcp,ticlts,ticotsord,ticots rpcbind     superuser
    100001  4,3,2     ticlts,udp,udp6                  rstatd      superuser
    100002  3,2       ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6 rusersd     superuser
    100003  3,2       tcp,udp,tcp6,udp6                nfs         superuser
    100005  3,2,1     ticots,ticotsord,tcp,tcp6,ticlts,udp,udp6 mountd      superuser
    100007  1,2,3     ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 ypbind      superuser
    100008  1         ticlts,udp,udp6                  walld       superuser
    100011  1         ticlts,udp,udp6                  rquotad     superuser
    100012  1         ticlts,udp,udp6                  sprayd      superuser
    100021  4,3,2,1   tcp,udp,tcp6,udp6                nlockmgr    superuser
    100024  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 status      superuser
    100029  3,2,1     ticots,ticotsord,ticlts          keyserv     superuser
    100068  5         tcp,udp                          cmsd        superuser
    100083  1         tcp,tcp6                         ttdbserverd superuser
    100099  3         ticotsord                        autofs      superuser
    100133  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
    100134  1         ticotsord                        tokenring   superuser
    100155  1         ticots,ticotsord,tcp,tcp6        smserverd   superuser
    100221  1         tcp,tcp6                         -           superuser
    100227  3,2       tcp,udp,tcp6,udp6                nfs_acl     superuser
    100229  1         tcp,tcp6                         metad       superuser
    100230  1         tcp,tcp6                         metamhd     superuser
    100231  1         ticots,ticotsord,ticlts          -           superuser
    100234  1         ticotsord                        gssd        superuser
    100235  1         tcp,tcp6                         -           superuser
    100242  1         tcp,tcp6                         metamedd    superuser
    100249  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
    300326  4         tcp,tcp6                         -           superuser
    300598  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
    390113  1         tcp                              -           unknown
 805306368  1         ticots,ticotsord,ticlts,tcp,udp,tcp6,udp6 -           superuser
1289637086  1,5       tcp                              -           26069

The following two examples show how to gather information about a particular RPC service by selecting a particular transport on a server. The first example checks the mountd service that is running over TCP. The second example checks the NFS service that is running over UDP.


% rpcinfo -t bee mountd
program 100005 version 1 ready and waiting
program 100005 version 2 ready and waiting
program 100005 version 3 ready and waiting
% rpcinfo -u bee nfs
program 100003 version 2 ready and waiting
program 100003 version 3 ready and waiting

snoop Command

This command is often used to watch for packets on the network. The snoop command must be run as root. The use of this command is a good way to ensure that the network hardware is functioning on both the client and the server. Many options are available. See the snoop(1M) man page. A shortened synopsis of the command follows:

snoop [ -d device ] [ -o filename ] [ host hostname ]

-d device

Specifies the local network interface

-o filename

Stores all the captured packets into the named file

hostname

Displays packets going to and from a specific host only

The -d device option is useful on those servers that have multiple network interfaces. You can use many expressions other than setting the host. A combination of command expressions with grep can often generate data that is specific enough to be useful.

When troubleshooting, make sure that packets are going to and from the proper host. Also, look for error messages. Saving the packets to a file can simplify the review of the data.
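
For example, a capture might be limited to traffic for one client and saved to a file for later review, then filtered with grep. This is only a sketch: the interface name hme0, the host name bee, and the capture file path are placeholders for your own values.


# snoop -d hme0 -o /tmp/nfs.cap host bee
# snoop -i /tmp/nfs.cap | grep -i nfs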

truss Command

You can use this command to check if a process is hung. The truss command must be run by the owner of the process or by root. You can use many options with this command. See the truss(1) man page. A shortened syntax of the command follows.

truss [ -t syscall ] -p pid

-t syscall

Selects system calls to trace

-p pid

Indicates the PID of the process to be traced

The syscall can be a comma-separated list of system calls to be traced. Also, prefixing syscall with an ! excludes the listed system calls from the trace.
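
For example, the following hypothetical invocations trace only the read and write system calls of a process, or trace every call except poll. The PID 243 is a placeholder.


# /usr/bin/truss -t read,write -p 243
# /usr/bin/truss -t '!poll' -p 243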

The following example shows that the process is waiting for another connection request from a new client.


# /usr/bin/truss -p 243
poll(0x00024D50, 2, -1)         (sleeping...)

The previous example shows a normal response. If the response does not change after a new connection request has been made, the process could be hung. Follow the instructions in How to Restart NFS Services to fix the hung program. Review the instructions in NFS Troubleshooting Procedures to fully verify that your problem is a hung program.

NFS Over RDMA

Starting in the Solaris 10 release, the default transport for NFS is the Remote Direct Memory Access (RDMA) protocol, which is a technology for memory-to-memory transfer of data over high-speed networks. Specifically, RDMA provides remote data transfer directly to and from memory without CPU intervention. RDMA also provides direct data placement, which eliminates data copies and, therefore, further eliminates CPU intervention. Thus, RDMA relieves not only the host CPU, but also reduces contention for the host memory and I/O buses. To provide this capability, RDMA combines the interconnect I/O technology of InfiniBand on SPARC platforms with the Solaris operating system. The following figure shows the relationship of RDMA to other protocols, such as UDP and TCP.

Figure 6–1 Relationship of RDMA to Other Protocols

The context describes the graphic.

Because RDMA is the default transport protocol for NFS, no special share or mount options are required to use RDMA on a client or server. The existing automounter maps, vfstab and dfstab, work with the RDMA transport. NFS mounts over the RDMA transport occur transparently when InfiniBand connectivity exists on SPARC platforms between the client and the server. If the RDMA transport is not available on both the client and the server, the TCP transport is the initial fallback, followed by UDP if TCP is unavailable. Note, however, that if you use the proto=rdma mount option, NFS mounts are forced to use RDMA only.

To restrict NFS to the TCP and UDP transports, you can use the proto=tcp or proto=udp mount option. Either setting disables RDMA on an NFS client. For more information about NFS mount options, see the mount_nfs(1M) man page and mount Command.
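
For instance, commands in the following form force a mount to use RDMA or restrict it to TCP. This is only an illustration; the server name bee and the path names are placeholders.


# mount -F nfs -o proto=rdma bee:/export/share /mnt
# mount -F nfs -o proto=tcp bee:/export/share /mnt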


Note –

RDMA for InfiniBand uses the IP addressing format and the IP lookup infrastructure to specify peers. However, because RDMA is a separate protocol stack, it does not fully implement all IP semantics. For example, RDMA does not use IP addressing to communicate with peers. Therefore, RDMA might bypass configurations for various security policies that are based on IP addresses. However, the NFS and RPC administrative policies, such as mount restrictions and secure RPC, are not bypassed.


How the NFS Service Works

The following sections describe some of the complex functions of the NFS software. Note that some of the feature descriptions in this section are exclusive to NFS version 4.


Note –

If your system has zones enabled and you want to use this feature in a non-global zone, see System Administration Guide: Virtualization Using the Solaris Operating System for more information.


Version Negotiation in NFS

The NFS initiation process includes negotiating the protocol levels for servers and clients. If you do not specify the version level, then the best level is selected by default. For example, if both the client and the server can support version 3, then version 3 is used. If the client or the server can only support version 2, then version 2 is used.

Starting in the Solaris 10 release, you can set the keywords NFS_CLIENT_VERSMIN, NFS_CLIENT_VERSMAX, NFS_SERVER_VERSMIN, NFS_SERVER_VERSMAX in the /etc/default/nfs file. Your specified minimum and maximum values for the server and the client would replace the default values for these keywords. For both the client and the server the default minimum value is 2 and the default maximum value is 4. See Keywords for the /etc/default/nfs File. To find the version supported by the server, the NFS client begins with the setting for NFS_CLIENT_VERSMAX and continues to try each version until reaching the version setting for NFS_CLIENT_VERSMIN. As soon as the supported version is found, the process terminates. For example, if NFS_CLIENT_VERSMAX=4 and NFS_CLIENT_VERSMIN=2, then the client attempts version 4 first, then version 3, and finally version 2. If NFS_CLIENT_VERSMIN and NFS_CLIENT_VERSMAX are set to the same value, then the client always uses this version and does not attempt any other version. If the server does not offer this version, the mount fails.
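
For example, to restrict a client to NFS version 3 or version 2, you might set the following values in the /etc/default/nfs file. The values shown are only an illustration; adjust them for your site.


NFS_CLIENT_VERSMIN=2
NFS_CLIENT_VERSMAX=3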


Note –

You can override the values that are determined by the negotiation by using the vers option with the mount command. See the mount_nfs(1M) man page.


For procedural information, refer to Setting Up NFS Services.

Features in NFS Version 4

Many changes have been made to NFS in version 4. This section provides descriptions of these new features.


Note –

Starting in the Solaris 10 release, NFS version 4 does not support the LIPKEY/SPKM security flavor. Also, NFS version 4 does not use the mountd, nfslogd, and statd daemons.


For procedural information related to using NFS version 4, refer to Setting Up NFS Services.

Unsharing and Resharing a File System in NFS Version 4

With both NFS version 3 and version 4, if a client attempts to access a file system that has been unshared, the server responds with an error code. However, with NFS version 3 the server maintains any locks that the clients had obtained before the file system was unshared. Thus, when the file system is reshared, NFS version 3 clients can access the file system as though that file system had never been unshared.

With NFS version 4, when a file system is unshared, all the state for any open files or file locks in that file system is destroyed. If the client attempts to access these files or locks, the client receives an error. This error is usually reported as an I/O error to the application. Note, however, that resharing a currently shared file system to change options does not destroy any of the state on the server.
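
For example, commands in the following form unshare a file system and then reshare it with a changed option. The path /export/data and the ro option are placeholders.


# unshare /export/data
# share -F nfs -o ro /export/data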

For related information, refer to Client Recovery in NFS Version 4 or see the unshare_nfs(1M) man page.

File-System Namespace in NFS Version 4

NFS version 4 servers create and maintain a pseudo-file system, which provides clients with seamless access to all exported objects on the server. Prior to NFS version 4, the pseudo-file system did not exist. Clients were forced to mount each shared server file system for access. Consider the following example.

Figure 6–2 Views of the Server File System and the Client File System

The context describes the graphic.

Note that the client cannot see the payroll directory and the nfs4x directory, because these directories are not exported and do not lead to exported directories. However, the local directory is visible to the client, because local is an exported directory. The projects directory is visible to the client, because projects leads to the exported directory, nfs4. Thus, portions of the server namespace that are not explicitly exported are bridged with a pseudo-file system that views only the exported directories and those directories that lead to server exports.

A pseudo-file system is a structure that contains only directories and is created by the server. The pseudo-file system permits a client to browse the hierarchy of exported file systems. Thus, the client's view of the pseudo-file system is limited to paths that lead to exported file systems.

Previous versions of NFS did not permit a client to traverse server file systems without mounting each file system. However, in NFS version 4, the server namespace does the following:

For POSIX-related reasons, the Solaris NFS version 4 client does not cross server file-system boundaries. When such attempts are made, the client makes the directory appear to be empty. To remedy this situation, you must perform a mount for each of the server's file systems.

Volatile File Handles in NFS Version 4

File handles are created on the server and contain information that uniquely identifies files and directories. In NFS versions 2 and 3 the server returned persistent file handles. Thus, the client could guarantee that the server would generate a file handle that always referred to the same file. For example:

Thus, when the server received a request from a client that included a file handle, the resolution was straightforward and the file handle always referred to the correct file.

This method of identifying files and directories for NFS operations was fine for most UNIX-based servers. However, the method could not be implemented on servers that relied on other methods of identification, such as a file's path name. To resolve this problem, the NFS version 4 protocol permits a server to declare that its file handles are volatile. Thus, a file handle could change. If the file handle does change, the client must find the new file handle.

Like NFS versions 2 and 3, the Solaris NFS version 4 server always provides persistent file handles. However, Solaris NFS version 4 clients that access non-Solaris NFS version 4 servers must support volatile file handles if the server uses them. Specifically, when the server tells the client that the file handle is volatile, the client must cache the mapping between path name and file handle. The client uses the volatile file handle until it expires. On expiration, the client does the following:


Note –

The server always tells the client which file handles are persistent and which file handles are volatile.


Volatile file handles might expire for any of these reasons:

Note that if the client is unable to find the new file handle, an error message is put in the syslog file. Further attempts to access this file fail with an I/O error.

Client Recovery in NFS Version 4

The NFS version 4 protocol is a stateful protocol. A protocol is stateful when both the client and the server maintain current information about the following.

When a failure occurs, such as a server crash, the client and the server work together to reestablish the open and lock states that existed prior to the failure.

When a server crashes and is rebooted, the server loses its state. The client detects that the server has rebooted and begins the process of helping the server rebuild its state. This process is known as client recovery, because the client directs the process.

When the client discovers that the server has rebooted, the client immediately suspends its current activity and begins the process of client recovery. When the recovery process starts, a message, such as the following, is displayed in the system error log /var/adm/messages.


NOTICE: Starting recovery server basil.example.company.com

During the recovery process, the client sends the server information about the client's previous state. Note, however, that during this period the client does not send any new requests to the server. Any new requests to open files or set file locks must wait for the server to complete its recovery period before proceeding.

When the client recovery process is complete, the following message is displayed in the system error log /var/adm/messages.


NOTICE: Recovery done for server basil.example.company.com

Now the client has successfully completed sending its state information to the server. However, even though the client has completed this process, other clients might not have completed their process of sending state information to the server. Therefore, for a period of time, the server does not accept any open or lock requests. This period of time, which is known as the grace period, is designated to permit all the clients to complete their recovery.

During the grace period, if the client attempts to open any new files or establish any new locks, the server denies the request with the GRACE error code. On receiving this error, the client must wait for the grace period to end and then resend the request to the server. During the grace period the following message is displayed.


NFS server recovering

Note that during the grace period the commands that do not open files or set file locks can proceed. For example, the commands ls and cd do not open a file or set a file lock. Thus, these commands are not suspended. However, a command such as cat, which opens a file, would be suspended until the grace period ends.

When the grace period has ended, the following message is displayed.


NFS server recovery ok.

The client can now send new open and lock requests to the server.

Client recovery can fail for a variety of reasons. For example, if a network partition exists after the server reboots, the client might not be able to reestablish its state with the server before the grace period ends. When the grace period has ended, the server does not permit the client to reestablish its state because new state operations could create conflicts. For example, a new file lock might conflict with an old file lock that the client is trying to recover. When such situations occur, the server returns the NO_GRACE error code to the client.

If the recovery of an open operation for a particular file fails, the client marks the file as unusable and the following message is displayed.


WARNING: The following NFS file could not be recovered and was marked dead 
(can't reopen:  NFS status 70):  file :  filename

Note that the number 70 is only an example.

If reestablishing a file lock during recovery fails, the following error message is posted.


NOTICE: nfs4_send_siglost:  pid PROCESS-ID lost
lock on server SERVER-NAME

In this situation, the SIGLOST signal is posted to the process. The default action for the SIGLOST signal is to terminate the process.

For you to recover from this state, you must restart any applications that had files open at the time of the failure. Note that the following can occur.

Thus, some processes can access a particular file while other processes cannot.

OPEN Share Support in NFS Version 4

The NFS version 4 protocol provides several file-sharing modes that the client can use to control file access by other clients. A client can specify the following:

The Solaris NFS version 4 server fully implements these file-sharing modes. Therefore, if a client attempts to open a file in a way that conflicts with the current share mode, the server denies the attempt by failing the operation. When such attempts fail with the initiation of the open or create operations, the Solaris NFS version 4 client receives a protocol error. This error is mapped to the application error EACCES.

Even though the protocol provides several sharing modes, currently the open operation in Solaris does not offer multiple sharing modes. When opening a file, a Solaris NFS version 4 client can only use the DENY_NONE mode.

Also, even though the Solaris fcntl system call has an F_SHARE command to control file sharing, the fcntl commands cannot be implemented correctly with NFS version 4. If you use these fcntl commands on an NFS version 4 client, the client returns the EAGAIN error to the application.

Delegation in NFS Version 4

NFS version 4 provides both client support and server support for delegation. Delegation is a technique by which the server delegates the management of a file to a client. For example, the server could grant either a read delegation or a write delegation to a client. Read delegations can be granted to multiple clients at the same time, because these read delegations do not conflict with each other. A write delegation can be granted to only one client, because a write delegation conflicts with any file access by any other client. While holding a write delegation, the client would not send various operations to the server because the client is guaranteed exclusive access to a file. Similarly, the client would not send various operations to the server while holding a read delegation. The reason is that the server guarantees that no client can open the file in write mode.

The effect of delegation is to greatly reduce the interactions between the server and the client for delegated files. Therefore, network traffic is reduced, and performance on the client and the server is improved. Note, however, that the degree of performance improvement depends on the kind of file interaction used by an application and the amount of network and server congestion.

The decision about whether to grant a delegation is made entirely by the server. A client does not request a delegation. The server makes decisions about whether to grant a delegation, based on the access patterns for the file. If a file has been recently accessed in write mode by several different clients, the server might not grant a delegation. The reason is that this access pattern indicates the potential for future conflicts.

A conflict occurs when a client accesses a file in a manner that is inconsistent with the delegations that are currently granted for that file. For example, if a client holds a write delegation on a file and a second client opens that file for read or write access, the server recalls the first client's write delegation. Similarly, if a client holds a read delegation and another client opens the same file for writing, the server recalls the read delegation. Note that in both situations, the second client is not granted a delegation because a conflict now exists. When a conflict occurs, the server uses a callback mechanism to contact the client that currently holds the delegation. On receiving this callback, the client sends the file's updated state to the server and returns the delegation. If the client fails to respond to the recall, the server revokes the delegation. In such instances, the server rejects all operations from the client for this file, and the client reports the requested operations as failures. Generally, these failures are reported to the application as I/O errors. To recover from these errors, the file must be closed and then reopened. Failures from revoked delegations can occur when a network partition exists between the client and the server while the client holds a delegation.

Note that one server does not resolve access conflicts for a file that is stored on another server. Thus, an NFS server only resolves conflicts for files that it stores. Furthermore, in response to conflicts that are caused by clients that are running various versions of NFS, an NFS server can only initiate recalls to the client that is running NFS version 4. An NFS server cannot initiate recalls for clients that are running earlier versions of NFS.

The process for detecting conflicts varies. For example, unlike NFS version 4, because version 2 and version 3 do not have an open procedure, the conflict is detected only after the client attempts to read, write, or lock a file. The server's response to these conflicts varies also. For example:

These conditions clear when the delegation conflict has been resolved.

By default, server delegation is enabled. You can disable delegation by modifying the /etc/default/nfs file. For procedural information, refer to How to Select Different Versions of NFS on a Server.
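
As a sketch, disabling delegation would involve setting a keyword such as the following in the /etc/default/nfs file. The keyword and value shown here are an illustration only; confirm the exact syntax for your release.


NFS_SERVER_DELEGATION=off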

No keywords are required for client delegation. The NFS version 4 callback daemon, nfs4cbd, provides the callback service on the client. This daemon is started automatically whenever a mount for NFS version 4 is enabled. By default, the client provides the necessary callback information to the server for all Internet transports that are listed in the /etc/netconfig system file. Note that if the client is enabled for IPv6 and if the IPv6 address for the client's name can be determined, then the callback daemon accepts IPv6 connections.

The callback daemon uses a transient program number and a dynamically assigned port number. This information is provided to the server, and the server tests the callback path before granting any delegations. If the callback path does not test successfully, the server does not grant delegations, which is the only externally visible behavior.

Note that because callback information is embedded within an NFS version 4 request, the server is unable to contact the client through a device that uses Network Address Translation (NAT). Also, the callback daemon uses a dynamic port number. Therefore, the server might not be able to traverse a firewall, even if that firewall enables normal NFS traffic on port 2049. In such situations, the server does not grant delegations.

ACLs and nfsmapid in NFS Version 4

An access control list (ACL) provides better file security by enabling the owner of a file to define file permissions for the file owner, the group, and other specific users and groups. ACLs are set on the server and the client by using the setfacl command. See the setfacl(1) man page. In NFS version 4, the ID mapper, nfsmapid, is used to map user or group IDs in ACL entries on a server to user or group IDs in ACL entries on a client. The reverse is also true. The user and group IDs in the ACL entries must exist on both the client and the server.
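
For example, a command in the following form grants read access to one additional user. The user name terry and the file path are placeholders.


% setfacl -m user:terry:r-- /export/share/file1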

Reasons for ID Mapping to Fail

The following situations can cause ID mapping to fail:

Avoiding ID Mapping Problems With ACLs

To avoid ID mapping problems, do the following:

Checking for Unmapped User or Group IDs

To determine if any user or group cannot be mapped on the server or client, use the following script:


#! /usr/sbin/dtrace -Fs

sdt:::nfs4-acl-nobody
{
     printf("validate_idmapping: (%s) in the ACL could not be mapped!", 
stringof(arg0));
}
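
To run this script, you could save it in a file, make the file executable, and execute it while the NFS traffic of interest is generated. The file name used here is only an example.


# chmod +x /var/tmp/nfs4-acl-check.d
# /var/tmp/nfs4-acl-check.d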

Note –

The probe name that is used in this script is an interface that could change in the future. For more information, see Stability Levels in Solaris Dynamic Tracing Guide.


Additional Information About ACLs or nfsmapid

See the following:

UDP and TCP Negotiation

During initiation, the transport protocol is also negotiated. By default, the first connection-oriented transport that is supported on both the client and the server is selected. If this selection does not succeed, the first available connectionless transport protocol is used. The transport protocols that are supported on a system are listed in /etc/netconfig. TCP is the connection-oriented transport protocol that is supported by the release. UDP is the connectionless transport protocol.

When both the NFS protocol version and the transport protocol are determined by negotiation, the NFS protocol version is given precedence over the transport protocol. The NFS version 3 protocol that uses UDP is given higher precedence than the NFS version 2 protocol that is using TCP. You can manually select both the NFS protocol version and the transport protocol with the mount command. See the mount_nfs(1M) man page. Under most conditions, allow the negotiation to select the best options.
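
For example, to force an NFS version 2 mount over UDP rather than accept the negotiated result, you could use a command in the following form. The server name bee and the path names are placeholders.


# mount -F nfs -o vers=2,proto=udp bee:/export/share /mnt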

File Transfer Size Negotiation

The file transfer size establishes the size of the buffers that are used when transferring data between the client and the server. In general, larger transfer sizes are better. The NFS version 3 protocol has an unlimited transfer size. However, starting with the Solaris 2.6 release, the software bids a default buffer size of 32 Kbytes. The client can bid a smaller transfer size at mount time if needed, but under most conditions this bid is not necessary.

The transfer size is not negotiated with systems that use the NFS version 2 protocol. Under this condition, the maximum transfer size is set to 8 Kbytes.

You can use the -rsize and -wsize options to set the transfer size manually with the mount command. You might need to reduce the transfer size for some PC clients. Also, you can increase the transfer size if the NFS server is configured to use larger transfer sizes.
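
For example, a mount command in the following form reduces both the read and write transfer sizes to 8 Kbytes. The server name and path names are placeholders.


# mount -F nfs -o rsize=8192,wsize=8192 bee:/export/share /mnt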


Note –

Starting in the Solaris 10 release, restrictions on wire transfer sizes have been relaxed. The transfer size is based on the capabilities of the underlying transport. For example, the NFS transfer limit for UDP is still 32 Kbytes. However, because TCP is a streaming protocol without the datagram limits of UDP, maximum transfer sizes over TCP have been increased to 1 Mbyte.


How File Systems Are Mounted

The following description applies to NFS version 3 mounts. The NFS version 4 mount process does not include the portmap service nor does it include the MOUNT protocol.

When a client needs to mount a file system from a server, the client must obtain a file handle from the server. The file handle must correspond to the file system. This process requires that several transactions occur between the client and the server. In this example, the client is attempting to mount /export/home9/terry from the server. A snoop trace for this transaction follows.


client -> server PORTMAP C GETPORT prog=100005 (MOUNT) vers=3 proto=UDP
server -> client PORTMAP R GETPORT port=33492
client -> server MOUNT3 C Null
server -> client MOUNT3 R Null 
client -> server MOUNT3 C Mount /export/home9/terry
server -> client MOUNT3 R Mount OK FH=9000 Auth=unix
client -> server PORTMAP C GETPORT prog=100003 (NFS) vers=3 proto=TCP
server -> client PORTMAP R GETPORT port=2049
client -> server NFS C NULL3
server -> client NFS R NULL3 
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK

In this trace, the client first requests the mount port number from the portmap service on the NFS server. After the client receives the mount port number (33492), that number is used to test the availability of the service on the server. After the client has determined that a service is running on that port number, the client then makes a mount request. When the server responds to this request, the server includes the file handle for the file system (9000) being mounted. The client then sends a request for the NFS port number. When the client receives the number from the server, the client tests the availability of the NFS service (nfsd). Also, the client requests NFS information about the file system that uses the file handle.

In the following trace, the client is mounting the file system with the public option.


client -> server NFS C LOOKUP3 FH=0000 /export/home9/terry
server -> client NFS R LOOKUP3 OK FH=9000
client -> server NFS C FSINFO3 FH=9000
server -> client NFS R FSINFO3 OK
client -> server NFS C GETATTR3 FH=9000
server -> client NFS R GETATTR3 OK

By using the default public file handle (which is 0000), all the transactions to obtain information from the portmap service and to determine the NFS port number are skipped.


Note –

NFS version 4 provides support for volatile file handles. For more information, refer to Volatile File Handles in NFS Version 4.


Effects of the -public Option and NFS URLs When Mounting

Using the -public option can create conditions that cause a mount to fail. Adding an NFS URL can also confuse the situation. The following list describes the specifics of how a file system is mounted when you use these options.

Client-Side Failover

By using client-side failover, an NFS client can be aware of multiple servers that are making the same data available and can switch to an alternate server when the current server is unavailable. The file system can become unavailable if one of the following occurs.

The failover, under these conditions, is normally transparent to the user. Thus, the failover can occur at any time without disrupting the processes that are running on the client.

Failover requires that the file system be mounted read-only. The file systems must be identical for the failover to occur successfully. See What Is a Replicated File System? for a description of what makes a file system identical. A static file system or a file system that is not changed often is the best candidate for failover.

You cannot use CacheFS and client-side failover on the same NFS mount. Extra information is stored for each CacheFS file system. This information cannot be updated during failover, so only one of these two features can be used when mounting a file system.

The number of replicas that need to be established for every file system depends on many factors. Ideally, you should have a minimum of two servers. Each server should support multiple subnets. This setup is better than having a unique server on each subnet. The process requires that each listed server be checked. Therefore, if more servers are listed, each mount is slower.
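
As a sketch, a read-only mount that lists two replica servers might look like the following. The server names bee and wasp and the path names are placeholders.


# mount -F nfs -o ro bee,wasp:/export/share/local /usr/local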

Failover Terminology

To fully comprehend the process, you need to understand two terms.

What Is a Replicated File System?

For the purposes of failover, a file system can be called a replica when each file is the same size and has the same file type as the original file system. Permissions, creation dates, and other file attributes are not considered. If the file sizes or file types are different, the remap fails and the process hangs until the old server becomes available. In NFS version 4, the behavior is different. See Client-Side Failover in NFS Version 4.

You can maintain a replicated file system by using rdist, cpio, or another file transfer mechanism. Because updating the replicated file systems causes inconsistency, for best results consider these precautions:

Failover and NFS Locking

Some software packages require read locks on files. To prevent these products from breaking, read locks on read-only file systems are allowed but are visible to the client side only. The locks persist through a remap because the server does not “know” about the locks. Because the files should not change, you do not need to lock the file on the server side.

Client-Side Failover in NFS Version 4

In NFS version 4, if a replica cannot be established because the file sizes are different or the file types are not the same, then the following happens.


Note –

If you restart the application and try again to access the file, you should be successful.


In NFS version 4, you no longer receive replication errors for directories of different sizes. In prior versions of NFS, this condition was treated as an error and would impede the remapping process.

Furthermore, in NFS version 4, if a directory read operation is unsuccessful, the operation is performed by the next listed server. In previous versions of NFS, unsuccessful read operations would cause the remap to fail and the process to hang until the original server was available.

Large Files

Starting with the Solaris 2.6 release, the Solaris OS supports files that are over 2 Gbytes. By default, UFS file systems are mounted with the -largefiles option to support the new capability. Previous releases cannot handle files of this size. See How to Disable Large Files on an NFS Server for instructions.

If the server's file system is mounted with the -largefiles option, a Solaris 2.6 NFS client can access large files without the need for changes. However, not all Solaris 2.6 commands can handle these large files. See largefile(5) for a list of the commands that can handle the large files. Clients that cannot support the NFS version 3 protocol with the large file extensions cannot access any large files. Although clients that run the Solaris 2.5 release can use the NFS version 3 protocol, large file support was not included in that release.

How NFS Server Logging Works

NFS server logging provides records of NFS reads and writes, as well as operations that modify the file system. This data can be used to track access to information. In addition, the records can provide a quantitative way to measure interest in the information.

When a file system with logging enabled is accessed, the kernel writes raw data into a buffer file. This data includes the following:

The nfslogd daemon converts this raw data into ASCII records that are stored in log files. During the conversion, the IP addresses are modified to host names and the UIDs are modified to logins if the name service that is enabled can find matches. The file handles are also converted into path names. To accomplish the conversion, the daemon tracks the file handles and stores information in a separate file handle-to-path table. That way, the path does not have to be identified again each time a file handle is accessed. Because no changes to the mappings are made in the file handle-to-path table if nfslogd is turned off, you must keep the daemon running.


Note –

Server logging is not supported in NFS version 4.


How the WebNFS Service Works

The WebNFS service makes files in a directory available to clients by using a public file handle. A file handle is an address that is generated by the kernel that identifies a file for NFS clients. The public file handle has a predefined value, so the server does not need to generate a file handle for the client. The ability to use this predefined file handle reduces network traffic by eliminating the MOUNT protocol. This ability should also accelerate processes for the clients.

By default, the public file handle on an NFS server is established on the root file system. This default provides WebNFS access to any clients that already have mount privileges on the server. You can change the public file handle to point to any file system by using the share command.
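
For example, a command in the following form moves the public file handle to the /export/ftp file system. The ro option and the path are shown only as an illustration.


# share -F nfs -o ro,public /export/ftp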

When the client has the file handle for the file system, a LOOKUP is run to determine the file handle for the file to be accessed. The NFS protocol allows the evaluation of only one path name component at a time. Each additional level of directory hierarchy requires another LOOKUP. A WebNFS server can evaluate an entire path name with a single multi-component lookup transaction when the LOOKUP is relative to the public file handle. Multi-component lookup enables the WebNFS server to deliver the file handle to the desired file without exchanging the file handles for each directory level in the path name.

In addition, an NFS client can initiate concurrent downloads over a single TCP connection. This connection provides quick access without the additional load on the server that is caused by setting up multiple connections. Although web browser applications support concurrent downloading of multiple files, each file has its own connection. By using one connection, the WebNFS software reduces the overhead on the server.

If the final component in the path name is a symbolic link to another file system, the client can access the file if the client already has access through normal NFS activities.

Normally, an NFS URL is evaluated relative to the public file handle. The evaluation can be changed to be relative to the server's root file system by adding an additional slash to the beginning of the path. In this example, these two NFS URLs are equivalent if the public file handle has been established on the /export/ftp file system.


nfs://server/junk
nfs://server//export/ftp/junk

Note –

The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.


How WebNFS Security Negotiation Works

The Solaris 8 release includes a new protocol that enables a WebNFS client to negotiate a selected security mechanism with a WebNFS server. The new protocol uses security negotiation multi-component lookup, which is an extension to the multi-component lookup that was used in earlier versions of the WebNFS protocol.

The WebNFS client initiates the process by making a regular multi-component lookup request by using the public file handle. Because the client has no knowledge of how the path is protected by the server, the default security mechanism is used. If the default security mechanism is not sufficient, the server replies with an AUTH_TOOWEAK error. This reply indicates that the default mechanism is not valid. The client needs to use a stronger default mechanism.

When the client receives the AUTH_TOOWEAK error, the client sends a request to the server to determine which security mechanisms are required. If the request succeeds, the server responds with an array of security mechanisms that are required for the specified path. Depending on the size of the array of security mechanisms, the client might have to make more requests to obtain the complete array. If the server does not support WebNFS security negotiation, the request fails.

After a successful request, the WebNFS client selects the first security mechanism from the array that the client supports. The client then issues a regular multi-component lookup request by using the selected security mechanism to acquire the file handle. All subsequent NFS requests are made by using the selected security mechanism and the file handle.


Note –

The NFS version 4 protocol is preferred over the WebNFS service. NFS version 4 fully integrates all the security negotiation that was added to the MOUNT protocol and the WebNFS service.


WebNFS Limitations With Web Browser Use

Several functions that a web site using HTTP can provide are not supported by the WebNFS software. These differences stem from the fact that the NFS server only sends the file, so any special processing must be done on the client. If you need to have one web site configured for both WebNFS and HTTP access, consider the following issues:

Secure NFS System

The NFS environment is a powerful and convenient way to share file systems on a network of different computer architectures and operating systems. However, the same features that make sharing file systems through NFS convenient also pose some security problems. Historically, most NFS implementations have used UNIX (or AUTH_SYS) authentication, but stronger authentication methods such as AUTH_DH have also been available. When using UNIX authentication, an NFS server authenticates a file request by authenticating the computer that makes the request, but not the user. Therefore, a client user can run su and impersonate the owner of a file. If DH authentication is used, the NFS server authenticates the user, making this sort of impersonation much harder.

With root access and knowledge of network programming, anyone can introduce arbitrary data into the network and extract any data from the network. The most dangerous attacks are those attacks that involve the introduction of data. An example is the impersonation of a user by generating the right packets or by recording “conversations” and replaying them later. These attacks affect data integrity. Attacks that involve passive eavesdropping, which is merely listening to network traffic without impersonating anybody, are not as dangerous, because data integrity is not compromised. Users can protect the privacy of sensitive information by encrypting data that is sent over the network.

A common approach to network security problems is to leave the solution to each application. A better approach is to implement a standard authentication system at a level that covers all applications.

The Solaris operating system includes an authentication system at the level of the remote procedure call (RPC), which is the mechanism on which the NFS operation is built. This system, known as Secure RPC, greatly improves the security of network environments and provides additional security to services such as the NFS system. When the NFS system uses the facilities that are provided by Secure RPC, it is known as a Secure NFS system.

Secure RPC

Secure RPC is fundamental to the Secure NFS system. The goal of Secure RPC is to build a system that is at minimum as secure as a time-sharing system. In a time-sharing system all users share a single computer. A time-sharing system authenticates a user through a login password. With Data Encryption Standard (DES) authentication, the same authentication process is completed. Users can log in on any remote computer just as users can log in on a local terminal. The users' login passwords are their assurance of network security. In a time-sharing environment, the system administrator has an ethical obligation not to change a password to impersonate someone. In Secure RPC, the network administrator is trusted not to alter entries in a database that stores public keys.

You need to be familiar with two terms to understand an RPC authentication system: credentials and verifiers. Using ID badges as an example, the credential is what identifies a person: a name, address, and birthday. The verifier is the photo that is attached to the badge. You can be sure that the badge has not been stolen by checking the photo on the badge against the person who is carrying the badge. In RPC, the client process sends both a credential and a verifier to the server with each RPC request. The server sends back only a verifier because the client already “knows” the server's credentials.

RPC's authentication is open ended, which means that a variety of authentication systems can be plugged into it, such as UNIX, DH, and KERB.

When UNIX authentication is used by a network service, the credentials contain the client's host name, UID, GID, and group-access list. However, the verifier contains nothing. Because no verifier exists, a superuser could falsify appropriate credentials by using commands such as su. Another problem with UNIX authentication is that UNIX authentication assumes all computers on a network are UNIX computers. UNIX authentication breaks down when applied to other operating systems in a heterogeneous network.

To overcome the problems of UNIX authentication, Secure RPC uses DH authentication.

DH Authentication

DH authentication uses the Data Encryption Standard (DES) and Diffie-Hellman public-key cryptography to authenticate both users and computers in the network. DES is a standard encryption mechanism. Diffie-Hellman public-key cryptography is a cipher system that involves two keys: one public and one secret. The public keys and secret keys are stored in the namespace. NIS stores the keys in the public-key map. These maps contain the public key and secret key for all potential users. See the System Administration Guide: Naming and Directory Services (DNS, NIS, and LDAP) for more information about how to set up the maps.

The security of DH authentication is based on a sender's ability to encrypt the current time, which the receiver can then decrypt and check against its own clock. The timestamp is encrypted with DES. The requirements for this scheme to work are as follows:

If a network runs a time-synchronization program, the time on the client and the server is synchronized automatically. If a time-synchronization program is not available, timestamps can be computed by using the server's time instead of the network time. The client asks the server for the time before starting the RPC session, then computes the time difference between its own clock and the server's. This difference is used to offset the client's clock when computing timestamps. If the client and server clocks become unsynchronized, the server begins to reject the client's requests. The DH authentication system on the client resynchronizes with the server.

The client and server arrive at the same encryption key by generating a random conversation key, also known as the session key, and by using public-key cryptography to deduce a common key. The common key is a key that only the client and server are capable of deducing. The conversation key is used to encrypt and decrypt the client's timestamp. The common key is used to encrypt and decrypt the conversation key.

KERB Authentication

Kerberos is an authentication system that was developed at MIT. Kerberos offers a variety of encryption types, including DES. Kerberos support is no longer supplied as part of Secure RPC, but starting in the Solaris 9 release a server-side and client-side implementation is included. See Chapter 21, Introduction to the Kerberos Service, in System Administration Guide: Security Services for more information about the implementation of Kerberos authentication.

Using Secure RPC With NFS

Be aware of the following points if you plan to use Secure RPC:

How Mirrormounts Work

The Solaris Express, Developer Edition 1/08 release includes a new mounting facility called mirrormounts. Mirrormounts allow an NFSv4 client to access files in a file system as soon as the file system is shared on an NFSv4 server. The files can be accessed without the overhead of using the mount command or updating autofs maps. In effect, once an NFSv4 file system is mounted on a client, any other file systems from that server could also be mounted.

When to Use Mirrormounts

Generally, using the mirrormount facility is optimal for your NFSv4 clients except when you:

Mounting a File System Using Mirrormounts

If a file system is mounted on an NFSv4 client using manual mounts or autofs, any additional file systems added to the mounted file system may be mounted on the client using the mirrormount facility. The client requests access to the new file system using the same mount options that were used on the parent directory. If the mount fails for any reason, the normal NFSv4 security negotiations occur between the server and the client to adjust the mount options so that the mount request succeeds.

Where there is an existing automount trigger point set up for a particular server file system, the automount trigger takes precedence over mirrormounting, so a mirrormount will not occur for that file system. To use mirrormounts in this case, the automount entry would need to be removed.

For specific instructions on how to get mirrormounts to work see:

Unmounting a File System Using Mirrormounts

Mirrormounted file systems will be automatically unmounted if idle, after a certain period of inactivity. The period is set using the AUTOMOUNT_TIMEOUT property in /etc/default/autofs, which is used by the automounter for the same purpose.
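
For example, the following line in /etc/default/autofs sets the idle period to 600 seconds. The value is only an illustration.


AUTOMOUNT_TIMEOUT=600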

If an NFS file system is manually unmounted, then any mirrormounted file systems contained within it will also be unmounted, if idle. If there is an active mirrormounted file system within, the manual unmount will fail, as though that original file system were busy. A forced unmount will, however, be propagated through to all enclosed mirrormounted file systems.

If a file system boundary is encountered within an automounted file system, a mirrormount will occur. When the automounter unmounts the parent file system, any mirrormounted file systems within it will also be automatically unmounted, if idle. If there is an active mirrormounted file system, the automatic unmount will not occur, which preserves current automount behavior.

Autofs Maps

Autofs uses three types of maps: master maps, direct maps, and indirect maps.

Master Autofs Map

The auto_master map associates a directory with a map. The map is a master list that specifies all the maps that autofs should check. The following example shows what an auto_master file could contain.


Example 6–3 Sample /etc/auto_master File


# Master map for automounter 
# 
+auto_master 
/net            -hosts           -nosuid,nobrowse 
/home           auto_home        -nobrowse 
/-              auto_direct     -ro  

This example shows the generic auto_master file with one addition for the auto_direct map. Each line in the master map /etc/auto_master has the following syntax:

mount-point map-name [ mount-options ]

mount-point

mount-point is the full (absolute) path name of a directory. If the directory does not exist, autofs creates the directory if possible. If the directory exists and is not empty, mounting on the directory hides its contents. In this situation, autofs issues a warning.

The notation /- as a mount point indicates that this particular map is a direct map. The notation also means that no particular mount point is associated with the map.

map-name

map-name is the map autofs uses to find directions to locations, or mount information. If the name is preceded by a slash (/), autofs interprets the name as a local file. Otherwise, autofs searches for the mount information by using the search that is specified in the name-service switch configuration file (/etc/nsswitch.conf). Special maps are also used for /net. See Mount Point /net for more information.

mount-options

mount-options is an optional, comma-separated list of options that apply to the mounting of the entries that are specified in map-name, unless the entries in map-name list other options. Options for each specific type of file system are listed in the mount man page for that file system. For example, see the mount_nfs(1M) man page for NFS-specific mount options. For NFS-specific mount points, the bg (background) and fg (foreground) options do not apply.

A line that begins with # is a comment. All the text that follows until the end of the line is ignored.

To split long lines into shorter ones, put a backslash (\) at the end of the line. The maximum number of characters of an entry is 1024.


Note –

If the same mount point is used in two entries, the first entry is used by the automount command. The second entry is ignored.


Mount Point /home

The mount point /home is the directory under which the entries that are listed in /etc/auto_home (an indirect map) are to be mounted.


Note –

Autofs runs on all computers and supports /net and /home (automounted home directories) by default. These defaults can be overridden by entries in the NIS auto.master map or NIS+ auto_master table, or by local editing of the /etc/auto_master file.


Mount Point /net

Autofs mounts under the directory /net all the entries in the special map -hosts. The map is a built-in map that uses only the hosts database. Suppose that the computer gumbo is in the hosts database and it exports any of its file systems. The following command changes the current directory to the root directory of the computer gumbo.


% cd /net/gumbo

Autofs can mount only the exported file systems of host gumbo, that is, those file systems on a server that are available to network users instead of those file systems on a local disk. Therefore, all the files and directories on gumbo might not be available through /net/gumbo.

With the /net method of access, the server name is in the path and is location dependent. If you want to move an exported file system from one server to another, the path might no longer work. Instead, you should set up an entry in a map specifically for the file system you want rather than use /net.


Note –

Autofs checks the server's export list only at mount time. After a server's file systems are mounted, autofs does not check with the server again until the server's file systems are automatically unmounted. Therefore, newly exported file systems are not “seen” until the file systems on the client are unmounted and then remounted.


Direct Autofs Maps

A direct map is an automount point. With a direct map, a direct association exists between a mount point on the client and a directory on the server. Direct maps have a full path name and indicate the relationship explicitly. The following is a typical /etc/auto_direct map:


/usr/local          -ro \
   /bin                   ivy:/export/local/sun4 \
   /share                 ivy:/export/local/share \
   /src                   ivy:/export/local/src
/usr/man            -ro   oak:/usr/man \
                          rose:/usr/man \
                          willow:/usr/man 
/usr/games          -ro   peach:/usr/games 
/usr/spool/news     -ro   pine:/usr/spool/news \
                          willow:/var/spool/news 

Lines in direct maps have the following syntax:

key [ mount-options ] location

key

key is the path name of the mount point in a direct map.

mount-options

mount-options is the options that you want to apply to this particular mount. These options are required only if the options differ from the map default. Options for each specific type of file system are listed in the mount man page for that file system. For example, see the mount_cachefs(1M) man page for CacheFS specific mount options. For information about using CacheFS options with different versions of NFS, see Accessing NFS File Systems Using CacheFS.

location

location is the location of the file system. One or more file systems are specified as server:pathname for NFS file systems or :devicename for High Sierra file systems (HSFS).
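
For instance, a hedged sketch of a direct-map entry that uses the :devicename form to mount a local HSFS (CD-ROM) file system; the fstype option selects the file system type, and the mount point and device path are assumptions chosen only for illustration:


/cdrom/doc     -fstype=hsfs,ro     :/dev/dsk/c0t6d0s0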


Note –

The pathname should not include an automounted mount point. The pathname should be the actual absolute path to the file system. For instance, the location of a home directory should be listed as server:/export/home/username, not as server:/home/username.


As in the master map, a line that begins with # is a comment. All the text that follows until the end of the line is ignored. Put a backslash at the end of the line to split long lines into shorter ones.

Of all the maps, the entries in a direct map most closely resemble the corresponding entries in /etc/vfstab. An entry might appear in /etc/vfstab as follows:


dancer:/usr/local - /usr/local/tmp nfs - yes ro 

The equivalent entry appears in a direct map as follows:


/usr/local/tmp     -ro     dancer:/usr/local

Note –

No concatenation of options occurs between the automounter maps. Any options that are added to an automounter map override all options that are listed in maps that are searched earlier. For instance, options that are included in the auto_master map would be overridden by corresponding entries in any other map.


See How Autofs Selects the Nearest Read-Only Files for Clients (Multiple Locations) for other important features that are associated with this type of map.

Mount Point /-

In Example 6–3, the mount point /- tells autofs not to associate the entries in auto_direct with any specific mount point. Indirect maps use mount points that are defined in the auto_master file. Direct maps use mount points that are specified in the named map. Remember, in a direct map the key, or mount point, is a full path name.

An NIS or NIS+ auto_master file can have only one direct map entry because the mount point must be a unique value in the namespace. An auto_master file that is a local file can have any number of direct map entries if entries are not duplicated.

Indirect Autofs Maps

An indirect map uses a substitution value of a key to establish the association between a mount point on the client and a directory on the server. Indirect maps are useful for accessing specific file systems, such as home directories. The auto_home map is an example of an indirect map.

Lines in indirect maps have the following general syntax:

key [ mount-options ] location

key

key is a simple name without slashes in an indirect map.

mount-options

mount-options is the options that you want to apply to this particular mount. These options are required only if the options differ from the map default. Options for each specific type of file system are listed in the mount man page for that file system. For example, see the mount_nfs(1M) man page for NFS-specific mount options.

location

location is the location of the file system. One or more file systems are specified as server:pathname.


Note –

The pathname should not include an automounted mount point. The pathname should be the actual absolute path to the file system. For instance, the location of a directory should be listed as server:/usr/local, not as server:/net/server/usr/local.


As in the master map, a line that begins with # is a comment. All the text that follows until the end of the line is ignored. Put a backslash (\) at the end of the line to split long lines into shorter ones. Example 6–3 shows an auto_master map that contains the following entry:


/home      auto_home        -nobrowse    

auto_home is the name of the indirect map that contains the entries to be mounted under /home. A typical auto_home map might contain the following:


david                  willow:/export/home/david
rob                    cypress:/export/home/rob
gordon                 poplar:/export/home/gordon
rajan                  pine:/export/home/rajan
tammy                  apple:/export/home/tammy
jim                    ivy:/export/home/jim
linda    -rw,nosuid    peach:/export/home/linda

As an example, assume that the previous map is on host oak. Suppose that the user linda has an entry in the password database that specifies her home directory as /home/linda. Whenever linda logs in to computer oak, autofs mounts the directory /export/home/linda that resides on the computer peach. Her home directory is mounted read-write, nosuid.

Assume the following conditions occur: User linda's home directory is listed in the password database as /home/linda. Anybody, including Linda, has access to this path from any computer that is set up with the master map referring to the map in the previous example.

Under these conditions, user linda can run login or rlogin on any of these computers and have her home directory mounted in place for her.

Furthermore, now Linda can also type the following command:


% cd ~david

autofs mounts David's home directory for her (if all permissions allow).


Note –

No concatenation of options occurs between the automounter maps. Any options that are added to an automounter map override all options that are listed in maps that are searched earlier. For instance, options that are included in the auto_master map are overridden by corresponding entries in any other map.


On a network without a name service, you have to change all the relevant files (such as /etc/passwd) on all systems on the network to allow Linda access to her files. With NIS, make the changes on the NIS master server and propagate the relevant databases to the slave servers. On a network that is running NIS+, propagating the relevant databases to the slave servers is done automatically after the changes are made.
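
For example, on the NIS master server the updated source files are commonly propagated by rebuilding the maps; this sketch assumes the standard /var/yp Makefile is in use:


# cd /var/yp
# make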

How Autofs Works

Autofs is a client-side service that automatically mounts the appropriate file system. The automount command, the autofs file system, and the automountd daemon work together to accomplish automatic mounting.

The automount service, svc:/system/filesystem/autofs, which is called at system startup time, reads the master map file auto_master to create the initial set of autofs mounts. These autofs mounts are not automatically mounted at startup time. These mounts are points under which file systems are mounted in the future. These points are also known as trigger nodes.

After the autofs mounts are set up, these mounts can trigger file systems to be mounted under them. For example, when autofs receives a request to access a file system that is not currently mounted, autofs calls automountd, which actually mounts the requested file system.

After initially mounting autofs mounts, the automount command is used to update autofs mounts as necessary. The command compares the list of mounts in the auto_master map with the list of mounted file systems in the mount table file /etc/mnttab (formerly /etc/mtab). automount then makes the appropriate changes. This process allows system administrators to change mount information within auto_master and have those changes used by the autofs processes without stopping and restarting the autofs daemon. After the file system is mounted, further access does not require any action from automountd until the file system is automatically unmounted.

Unlike mount, automount does not read the /etc/vfstab file (which is specific to each computer) for a list of file systems to mount. The automount command is controlled within a domain and on computers through the namespace or local files.
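
For example, after editing /etc/auto_master or one of the maps that it references, you can run the automount command so that the changes are compared against /etc/mnttab and applied; the -v (verbose) option shown here simply reports what was mounted or unmounted:


# automount -v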

The following is a simplified overview of how autofs works.

The automount daemon automountd is started at boot time by the service svc:/system/filesystem/autofs. See Figure 6–3. This service also runs the automount command, which reads the master map and installs autofs mount points. See How Autofs Starts the Navigation Process (Master Map) for more information.

Figure 6–3 svc:/system/filesystem/autofs Service Starts automount


Autofs is a kernel file system that supports automatic mounting and unmounting.

    When a request is made to access a file system at an autofs mount point, the following occurs:

  1. Autofs intercepts the request.

  2. Autofs sends a message to the automountd for the requested file system to be mounted.

  3. automountd locates the file system information in a map, creates the trigger nodes, and performs the mount.

  4. Autofs allows the intercepted request to proceed.

  5. Autofs unmounts the file system after a period of inactivity.


Note –

Mounts that are managed through the autofs service should not be manually mounted or unmounted. Even if the operation is successful, the autofs service does not check that the object has been unmounted, resulting in possible inconsistencies. A reboot clears all the autofs mount points.
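
If such an inconsistency does occur, one possible alternative to rebooting is to restart the autofs service so that the autofs mount points are reinstalled. This is a suggestion based on the service being managed by the Service Management Facility, not a guaranteed remedy for every inconsistency:


# svcadm restart svc:/system/filesystem/autofs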


How Autofs Navigates Through the Network (Maps)

Autofs searches a series of maps to navigate through the network. Maps are files that contain information such as the password entries of all users on a network or the names of all host computers on a network. Effectively, the maps contain network-wide equivalents of UNIX administration files. Maps are available locally or through a network name service such as NIS or NIS+. You create maps to meet the needs of your environment by using the Solaris Management Console tools. See Modifying How Autofs Navigates the Network (Modifying Maps).

How Autofs Starts the Navigation Process (Master Map)

The automount command reads the master map at system startup. Each entry in the master map is a direct map name or an indirect map name, its path, and its mount options, as shown in Figure 6–4. The specific order of the entries is not important. automount compares entries in the master map with entries in the mount table to generate a current list.

Figure 6–4 Navigation Through the Master Map


Autofs Mount Process

What the autofs service does when a mount request is triggered depends on how the automounter maps are configured. The mount process is generally the same for all mounts. However, the final result changes with the mount point that is specified and the complexity of the maps. Starting with the Solaris 2.6 release, the mount process has also been changed to include the creation of the trigger nodes.

Simple Autofs Mount

To help explain the autofs mount process, assume that the following files are installed.


$ cat /etc/auto_master
# Master map for automounter
#
+auto_master
/net        -hosts        -nosuid,nobrowse
/home       auto_home     -nobrowse
/share      auto_share
$ cat /etc/auto_share
# share directory map for automounter
#
ws          gumbo:/export/share/ws

When the /share directory is accessed, the autofs service creates a trigger node for /share/ws, which is an entry in /etc/mnttab that resembles the following entry:


-hosts  /share/ws     autofs  nosuid,nobrowse,ignore,nest,dev=###

    When the /share/ws directory is accessed, the autofs service completes the process with these steps:

  1. Checks the availability of the server's mount service.

  2. Mounts the requested file system under /share. Now the /etc/mnttab file contains the following entries.


    -hosts  /share/ws     autofs  nosuid,nobrowse,ignore,nest,dev=###
    gumbo:/export/share/ws /share/ws   nfs   nosuid,dev=####    #####

Hierarchical Mounting

When multiple layers are defined in the automounter files, the mount process becomes more complex. Suppose that you expand the /etc/auto_share file from the previous example to contain the following:


# share directory map for automounter
#
ws       /       gumbo:/export/share/ws
         /usr    gumbo:/export/share/ws/usr

The mount process is basically the same as the previous example when the /share/ws mount point is accessed. In addition, a trigger node to the next level (/usr) is created in the /share/ws file system so that the next level can be mounted if it is accessed. In this example, /export/share/ws/usr must exist on the NFS server for the trigger node to be created.
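
As a hedged illustration, after /share/ws/usr is accessed the /etc/mnttab file might contain entries that resemble the following; the option strings and device numbers are placeholders:


-hosts  /share/ws     autofs  nosuid,nobrowse,ignore,nest,dev=###
gumbo:/export/share/ws      /share/ws      nfs   nosuid,dev=####
gumbo:/export/share/ws/usr  /share/ws/usr  nfs   nosuid,dev=####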


Caution –

Do not use the -soft option when specifying hierarchical layers. Refer to Autofs Unmounting for an explanation of this limitation.


Autofs Unmounting

The unmounting that occurs after a certain amount of idle time is from the bottom up (reverse order of mounting). If one of the directories at a higher level in the hierarchy is busy, only file systems below that directory are unmounted. During the unmounting process, any trigger nodes are removed and then the file system is unmounted. If the file system is busy, the unmount fails and the trigger nodes are reinstalled.


Caution –

Do not use the -soft option when specifying hierarchical layers. If the -soft option is used, requests to reinstall the trigger nodes can time out. The failure to reinstall the trigger nodes leaves no access to the next level of mounts. The only way to clear this problem is to have the automounter unmount all of the components in the hierarchy. The automounter can complete the unmounting either by waiting for the file systems to be automatically unmounted or by rebooting the system.


How Autofs Selects the Nearest Read-Only Files for Clients (Multiple Locations)

The example direct map contains the following:


/usr/local          -ro \
   /bin                   ivy:/export/local/sun4\
   /share                 ivy:/export/local/share\
   /src                   ivy:/export/local/src
/usr/man            -ro   oak:/usr/man \
                          rose:/usr/man \
                          willow:/usr/man
/usr/games          -ro   peach:/usr/games
/usr/spool/news     -ro   pine:/usr/spool/news \
                          willow:/var/spool/news 

The mount points /usr/man and /usr/spool/news list more than one location: three locations for the first mount point and two locations for the second. Any of the replicated locations can provide the same service to any user. This procedure is sensible only when you mount a file system that is read-only, as you must have some control over the locations of files that you write or modify. You want to avoid modifying files on one server on one occasion and, minutes later, modifying the “same” file on another server. The benefit is that the best available server is used automatically, without any effort required from the user.

If the file systems are configured as replicas (see What Is a Replicated File System?), the clients have the advantage of using failover. Not only is the best server automatically determined, but if that server becomes unavailable, the client automatically uses the next-best server. Failover was first implemented in the Solaris 2.6 release.

An example of a good file system to configure as a replica is man pages. In a large network, more than one server can export the current set of man pages. Which server you mount the man pages from does not matter if the server is running and exporting its file systems. In the previous example, multiple mount locations are expressed as a list of mount locations in the map entry.


/usr/man -ro oak:/usr/man rose:/usr/man willow:/usr/man 

In this example, you can mount the man pages from the servers oak, rose, or willow. Which server is best depends on a number of factors, including the number of servers that support a particular NFS protocol level, the proximity of each server, and any weighting that has been applied.

During the sorting process, a count is taken of the number of servers that support each version of the NFS protocol. Whichever version of the protocol is supported on the most servers becomes the protocol that is used by default. This selection provides the client with the maximum number of servers to depend on.

After the largest subset of servers with the same version of the protocol is found, that server list is sorted by proximity. To determine proximity, IPv4 addresses are inspected. The IPv4 addresses show which servers are in each subnet. Servers on a local subnet are given preference over servers on a remote subnet. Preference for the closest server reduces latency and network traffic.


Note –

Proximity cannot be determined for replicas that are using IPv6 addresses.


Figure 6–5 illustrates server proximity.

Figure 6–5 Server Proximity


If several servers that support the same protocol are on the local subnet, the time to connect to each server is determined and the fastest server is used. The sorting can also be influenced by using weighting (see Autofs and Weighting).

For example, if version 4 servers are more abundant, version 4 becomes the protocol that is used by default. However, now the sorting process is more complex. Here are some examples of how the sorting process works:


Note –

Weighting is also influenced by keyword values in the /etc/default/nfs file. Specifically, values for NFS_SERVER_VERSMIN, NFS_CLIENT_VERSMIN, NFS_SERVER_VERSMAX, and NFS_CLIENT_VERSMAX can exclude some versions from the sorting process. For more information about these keywords, see Keywords for the /etc/default/nfs File.
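
For instance, a minimal sketch of an /etc/default/nfs setting that caps the client at NFS version 3, so that version 4 is excluded from the sorting process; the value is an illustration only:


NFS_CLIENT_VERSMAX=3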


With failover, the sorting is checked at mount time when a server is selected. Multiple locations are useful in an environment where individual servers might not export their file systems temporarily.

Failover is particularly useful in a large network with many subnets. Autofs chooses the appropriate server and is able to confine NFS network traffic to a segment of the local network. If a server has multiple network interfaces, you can list the host name that is associated with each network interface as if the interface were a separate server. Autofs selects the nearest interface to the client.
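
For example, assuming the server oak has two network interfaces that are known by the hypothetical host names oak-bge0 and oak-bge1, a replicated map entry could list each interface as though it were a separate server:


/usr/man   -ro   oak-bge0:/usr/man   oak-bge1:/usr/man   rose:/usr/man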


Note –

No weighting and no proximity checks are performed with manual mounts. The mount command prioritizes the servers that are listed from left to right.


For more information, see the automount(1M) man page.

Autofs and Weighting

You can influence the selection of servers at the same proximity level by adding a weighting value to the autofs map. For example:


/usr/man -ro oak,rose(1),willow(2):/usr/man

The numbers in parentheses indicate a weighting. Servers without a weighting have a value of zero and, therefore, are most likely to be selected. The higher the weighting value, the lower the chance that the server is selected.


Note –

All other server selection factors are more important than weighting. Weighting is only considered when selecting between servers with the same network proximity.


Variables in a Map Entry

You can create a client-specific variable by prefixing a dollar sign ($) to its name. The variable helps you to accommodate different architecture types that are accessing the same file-system location. You can also use curly braces to delimit the name of the variable from appended letters or digits. Table 6–7 shows the predefined map variables.

Table 6–7 Predefined Map Variables

Variable    Meaning                                               Derived From    Example

ARCH        Architecture type                                     uname -m        sun4
CPU         Processor type                                        uname -p        sparc
HOST        Host name                                             uname -n        dinky
OSNAME      Operating system name                                 uname -s        SunOS
OSREL       Operating system release                              uname -r        5.8
OSVERS      Operating system version (version of the release)     uname -v        GENERIC

You can use variables anywhere in an entry line except as a key. For instance, suppose that you have a file server that exports binaries for SPARC and x86 architectures from /usr/local/bin/sparc and /usr/local/bin/x86 respectively. The clients can mount through a map entry such as the following:


/usr/local/bin     -ro     server:/usr/local/bin/$CPU

Now the same entry for all clients applies to all architectures.
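
Similarly, curly braces can delimit the variable name from appended characters. In this hedged sketch, server and the release-specific directory name are assumptions for illustration:


/usr/src/sys    -ro    server:/export/src/sunos${OSREL}src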


Note –

Most applications that are written for any of the sun4 architectures can run on all sun4 platforms. The -ARCH variable is hard-coded to sun4.


Maps That Refer to Other Maps

A map entry +mapname that is used in a file map causes automount to read the specified map as if it were included in the current file. If mapname is not preceded by a slash, autofs treats the map name as a string of characters and uses the name-service switch policy to find the map name. If the path name is an absolute path name, automount checks a local map of that name. If the map name starts with a dash (-), automount consults the appropriate built-in map, such as hosts.

The name-service switch file contains an entry for autofs, labeled automount, which specifies the order in which the name services are searched. The following file is an example of a name-service switch file.


#
# /etc/nsswitch.nis:
#
# An example file that could be copied over to /etc/nsswitch.conf;
# it uses NIS (YP) in conjunction with files.
#
# "hosts:" and "services:" in this file are used only if the /etc/netconfig
# file contains "switch.so" as a nametoaddr library for "inet" transports.
# the following two lines obviate the "+" entry in /etc/passwd and /etc/group.
passwd:         files nis
group:          files nis

# consult /etc "files" only if nis is down.
hosts:          nis [NOTFOUND=return] files
networks:       nis [NOTFOUND=return] files
protocols:      nis [NOTFOUND=return] files
rpc:            nis [NOTFOUND=return] files
ethers:         nis [NOTFOUND=return] files
netmasks:       nis [NOTFOUND=return] files
bootparams:     nis [NOTFOUND=return] files
publickey:      nis [NOTFOUND=return] files
netgroup:       nis
automount:      files nis
aliases:        files nis
# for efficient getservbyname() avoid nis
services:       files nis 

In this example, the local maps are searched before the NIS maps. Therefore, you can have a few entries in your local /etc/auto_home map for the most commonly accessed home directories. You can then use the switch to fall back to the NIS map for other entries.


bill               cs.csc.edu:/export/home/bill
bonny              cs.csc.edu:/export/home/bonny

After consulting the included map, if no match is found, automount continues scanning the current map. Therefore, you can add more entries after a + entry.


bill               cs.csc.edu:/export/home/bill
bonny              cs.csc.edu:/export/home/bonny
+auto_home 

The map that is included can be a local file or a built-in map. Remember, only local files can contain + entries.


+auto_home_finance      # NIS+ map
+auto_home_sales        # NIS+ map
+auto_home_engineering  # NIS+ map
+/etc/auto_mystuff      # local map
+auto_home              # NIS+ map
+-hosts                 # built-in hosts map 

Note –

You cannot use + entries in NIS+ or NIS maps.


Executable Autofs Maps

You can create an autofs map that executes some commands to generate the autofs mount points. You could benefit from using an executable autofs map if you need to be able to create the autofs structure from a database or a flat file. The disadvantage to using an executable map is that the map needs to be installed on each host. An executable map cannot be included in either the NIS or the NIS+ name service.

The executable map must have an entry in the auto_master file.


/execute    auto_execute

Here is an example of an executable map:


#!/bin/ksh
#
# executable map for autofs
#

case $1 in
    src)  echo '-nosuid,hard bee:/export1' ;;
esac

For this example to work, the file must be installed as /etc/auto_execute and must have the executable bit set. Set permissions to 744. Under these circumstances, running the following command causes the /export1 file system from bee to be mounted:


% ls /execute/src
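
A minimal sketch of installing the script, assuming it was saved in the current directory as auto_execute:


# cp auto_execute /etc/auto_execute
# chmod 744 /etc/auto_execute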

Modifying How Autofs Navigates the Network (Modifying Maps)

You can modify, delete, or add entries to maps to meet the needs of your environment. As applications and other file systems that users require change their location, the maps must reflect those changes. You can modify autofs maps at any time. Whether your modifications are effective the next time automountd mounts a file system depends on which map you modify and what kind of modification you make.

Default Autofs Behavior With Name Services

At boot time autofs is invoked by the service svc:/system/filesystem/autofs and autofs checks for the master auto_master map. Autofs is subject to the rules that are discussed subsequently.

Autofs uses the name service that is specified in the automount entry of the /etc/nsswitch.conf file. If NIS+ is specified, as opposed to local files or NIS, all map names are used as is. If NIS is selected and autofs cannot find a map that autofs needs, but finds a map name that contains one or more underscores, the underscores are changed to dots. This change allows the old NIS file names to work. Then autofs checks the map again, as shown in Figure 6–6.

Figure 6–6 How Autofs Uses the Name Service


The screen activity for this session would resemble the following example.


$ grep /home /etc/auto_master
/home           auto_home

$ ypmatch brent auto_home
Can't match key brent in map auto_home.  Reason: no such map in
server's domain.

$ ypmatch brent auto.home
diskus:/export/home/diskus1/&

If “files” is selected as the name service, all maps are assumed to be local files in the /etc directory. Autofs interprets a map name that begins with a slash (/) as local regardless of which name service autofs uses.

Autofs Reference

The remaining sections of this chapter describe more advanced autofs features and topics.

Autofs and Metacharacters

Autofs recognizes some characters as having a special meaning. Some characters are used for substitutions, and some characters are used to protect other characters from the autofs map parser.

Ampersand (&)

If you have a map with many subdirectories specified, as in the following, consider using string substitutions.


john        willow:/home/john
mary        willow:/home/mary
joe         willow:/home/joe
able        pine:/export/able
baker       peach:/export/baker

You can use the ampersand character (&) to substitute the key wherever the key appears. If you use the ampersand, the previous map changes to the following:


john        willow:/home/&
mary        willow:/home/&
joe         willow:/home/&
able        pine:/export/&
baker       peach:/export/&

You could also use key substitutions in a direct map, in situations such as the following:


/usr/man        willow,cedar,poplar:/usr/man

You can also simplify the entry further as follows:


/usr/man        willow,cedar,poplar:&

Notice that the ampersand substitution uses the whole key string. Therefore, if the key in a direct map starts with a / (as it should), the slash is included in the substitution. Consequently, for example, you could not do the following:


/progs          &1,&2,&3:/export/src/progs

The reason is that autofs would interpret the example as the following:


/progs          /progs1,/progs2,/progs3:/export/src/progs

Asterisk (*)

You can use the universal substitute character, the asterisk (*), to match any key. You could mount the /export file system from all hosts through this map entry.


*               &:/export

Each ampersand is substituted by the value of any given key. Autofs interprets the asterisk as an end-of-file character, so entries that follow an asterisk entry in the map are never consulted.

Autofs and Special Characters

If you have a map entry that contains special characters, you might have to mount directories that have names that confuse the autofs map parser. The autofs parser is sensitive to names that contain colons, commas, and spaces, for example. These names should be enclosed in double-quotes, as in the following:


/vms    -ro    vmsserver: -  -  - "rc0:dk1 - "
/mac    -ro    gator:/ - "Mr Disk - "