Oracle Solaris Cluster Data Service for Network File System (NFS) Guide

Registering and Configuring HA for NFS

This section describes how to register and configure HA for NFS.


Note - Other options also enable you to register and configure the data service. See Tools for Data Service Resource Administration in Oracle Solaris Cluster Data Services Planning and Administration Guide for details about these options.


Before you register and configure HA for NFS, run the following command to verify that the HA for NFS package, SUNWscnfs, is installed on the cluster.

# pkginfo -l SUNWscnfs

If the package has not been installed, see Installing the HA for NFS Packages for instructions on how to install the package.

Setting HA for NFS Extension Properties

The sections that follow contain instructions for registering and configuring resources. For information about the HA for NFS extension properties, see Appendix A, HA for NFS Extension Properties. The Tunable entry indicates when you can update a property.

To set an extension property of a resource, include the following option in the clresource(1CL) command that creates or modifies the resource:

-p property=value 
property

Identifies the extension property that you are setting.

value

Specifies the value to which you are setting the extension property.

You can also use the procedures in Chapter 2, Administering Data Service Resources, in Oracle Solaris Cluster Data Services Planning and Administration Guide to configure resources after the resources are created.
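
For example, assuming a hypothetical HA for NFS resource named nfs-res and an extension property such as Monitor_retry_count from Appendix A, HA for NFS Extension Properties, a command like the following sets the property on an existing resource:

# clresource set -p Monitor_retry_count=4 nfs-res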

Tools for Registering and Configuring HA for NFS

Oracle Solaris Cluster provides the following tools for registering and configuring HA for NFS:

The clsetup utility and Oracle Solaris Cluster Manager each provide a wizard for configuring HA for NFS. The wizards reduce the possibility of configuration errors that might result from command syntax errors or omissions. These wizards also ensure that all required resources are created and that all required dependencies between resources are set.

How to Register and Configure the Oracle Solaris Cluster HA for NFS by Using clsetup

Perform this procedure during your initial setup of Oracle Solaris Cluster HA for NFS. Perform this procedure from one node only.


Note - The following instructions explain how to perform this operation by using the clsetup utility.


Before You Begin

Before you start the Oracle Solaris Cluster HA for NFS wizard, ensure that the prerequisites for this task are met. The clsetup utility displays the list of prerequisites before it performs the configuration.

  1. Become superuser on any cluster node.
  2. Start the clsetup utility.
    # clsetup

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring Oracle Solaris Cluster HA for NFS and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  5. Verify that the prerequisites are met, and press Return.

    The clsetup utility displays a list of all cluster nodes that are online.

  6. Select the nodes where you require Oracle Solaris Cluster HA for NFS to run.
    • To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.
    • To select a subset of the listed nodes, type a comma-separated or space-separated list of the numbers that correspond to the nodes and press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the resource group's node list. The first node in the list is the primary node of this resource group.

    • To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the resource group's node list. The first node in the list is the primary node of this resource group.

  7. To confirm your selection of nodes, type d and press Return.

    The clsetup utility displays a list of existing logical hostname resources on the cluster.

  8. Choose one of the following options.
    • To use an existing logical hostname resource, type the number that corresponds to the required logical hostname and press Return. Go to Step 11.
    • To create a new logical hostname resource, type c and press Return. Go to Step 9.
  9. Type the logical hostname and press Return.

    The clsetup utility displays a list of existing logical hostname resources.

  10. Type the number that corresponds to the logical hostname resource to be created and press Return.

    The clsetup utility creates the logical hostname resource.

  11. To confirm your selection of logical hostname resource, type d and press Return.

    The clsetup utility displays information about file system mount points.

  12. Press Return to continue.

    The clsetup utility displays the existing file system mount points.

  13. Select the file system mount points for Oracle Solaris Cluster HA for NFS data files.
    • To select a subset of the listed file system mount points, type a comma-separated or space-separated list of the numbers that correspond to the file system mount points and press Return.
    • To select all file system mount points in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the file system mount points and press Return.
  14. To confirm your selection of file system mount points, type d and press Return.

    The clsetup utility displays a screen where you can specify the path prefix for the Oracle Solaris Cluster HA for NFS resource group.

  15. Select the path prefix for the Oracle Solaris Cluster HA for NFS resource group and press Return.

    The clsetup utility displays a screen where you can change the share option for the file system mount point that the NFS server is sharing.

  16. Select the share option and press Return.

    The clsetup utility displays the share options for the selected mount points.

  17. If you require a different name for any Oracle Solaris Cluster objects, change each name as follows.
    1. Type the number that corresponds to the name that you are changing and press Return.

      The clsetup utility displays a screen where you can specify the new name.

    2. At the New Value prompt, type the new name and press Return.

    The clsetup utility returns you to the list of the names of the Oracle Solaris Cluster objects that the utility will create.

  18. To confirm your selection of Oracle Solaris Cluster object names, type d and press Return.

    The clsetup utility displays information about the Oracle Solaris Cluster configuration that the utility will create.

  19. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  20. Press Return to continue.

    The clsetup utility returns you to the Data Services Menu.

  21. Type q and press Return.

    The clsetup utility returns you to the Main Menu.

  22. (Optional) Type q and press Return to quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your Oracle Solaris Cluster HA for NFS resource group when you restart the utility.

  23. Determine if the Oracle Solaris Cluster HA for NFS resource group and its resources are online.

    Use the clresourcegroup(1CL) utility for this purpose. By default, the clsetup utility assigns the name nfs-mountpoint-admin-rg to the Oracle Solaris Cluster HA for NFS resource group.

    # clresourcegroup status nfs-mountpoint-admin-rg
  24. If the Oracle Solaris Cluster HA for NFS resource group and its resources are not online, bring them online.
    # clresourcegroup online nfs-mountpoint-admin-rg

How to Register and Configure HA for NFS by Using the Oracle Solaris Cluster Command Line Interface (CLI)

Before You Begin

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
  2. Create the Pathprefix directory.

    Create a Pathprefix directory on the HA file system (global file system or failover file system). HA for NFS resources will use this directory to maintain administrative information.

    You can specify any directory for this purpose. However, you must manually create a Pathprefix directory for each resource group that you create.

    # mkdir -p Pathprefix-directory
  3. Create a failover resource group to contain the NFS resources.
    # clresourcegroup create [-n nodelist] -p Pathprefix=Pathprefix-directory resource-group
    [-n nodelist]

    Specifies an optional, comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the Resource Group Manager (RGM) considers primary nodes during failover.

    -p Pathprefix=Pathprefix-directory

    Specifies a directory that resources in this resource group will use to maintain administrative information. This is the directory that you created in Step 2.

    resource-group

    Specifies the failover resource group.
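
    For example, with hypothetical node names phys-schost-1 and phys-schost-2 and the administrative directory from Step 2 located at /global/nfs, the command might look like the following:

    # clresourcegroup create -n phys-schost-1,phys-schost-2 -p Pathprefix=/global/nfs nfs-rg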

  4. Verify that you have added all your logical hostname resources to the name service database.

    To avoid any failures because of name-service lookups, verify that all IP address-to-hostname mappings that are used by Oracle Solaris Cluster HA for NFS are present in the /etc/inet/hosts file on both the servers and the clients.
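
    For example, assuming the logical hostname schost-1 and a hypothetical IP address, each server and client would need an entry such as the following in /etc/inet/hosts:

    192.168.10.10   schost-1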

  5. Modify the hosts entry in /etc/nsswitch.conf so that, when a name is resolved from the local files database, the lookup returns success immediately instead of continuing to NIS or DNS.

    This modification enables HA-NFS to fail over correctly in the presence of public network failures.

    hosts: cluster files [SUCCESS=return] nis
    rpc: files nis
  6. (Optional) Customize the nfsd or lockd startup options.
    1. To customize nfsd options, on each cluster node open the /etc/init.d/nfs.server file, find the command line that starts with /usr/lib/nfs/nfsd, and add any additional arguments that you want.

      In Solaris 10, to customize nfsd options, open the /etc/default/nfs file and edit the NFSD_SERVERS variable.

    2. To customize lockd startup options, on each cluster node open the /etc/init.d/nfs.client file, find the command line that starts with /usr/lib/nfs/lockd, and add any command-line arguments that you want.

      You can set the lockd grace period with the LOCKD_GRACE_PERIOD parameter in the /etc/default/nfs file. However, if the grace period is set in a command-line argument in the /etc/init.d/nfs.client file, this will override the value set in LOCKD_GRACE_PERIOD.


    Note - Each command must remain on a single line. Breaking a command into multiple lines is not supported. The additional arguments must be valid options that are documented in the nfsd(1M) and lockd(1M) man pages.
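
    For example, on Solaris 10 you customize the nfsd options by editing the NFSD_SERVERS variable in the /etc/default/nfs file; a hypothetical value is shown here:

    NFSD_SERVERS=32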


  7. Add the desired logical hostname resources into the failover resource group.

    You must set up a logical hostname resource with this step. The logical hostname that you use with Oracle Solaris Cluster HA for NFS cannot be a SharedAddress resource type.

    # clreslogicalhostname create -g resource-group -h logical-hostname, … [-N netiflist] lhresource
    -g resource-group

    Specifies the resource group that is to hold the logical hostname resources.

    -h logical-hostname, …

    Specifies the logical hostname resource to be added.

    -N netiflist

    Specifies an optional, comma-separated list that identifies the IP Networking Multipathing groups that are on each node. Each element in netiflist must be in the form netif@node. netif can be given as an IP Networking Multipathing group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.


    Note - If you require a fully qualified hostname, you must specify the fully qualified name with the -h option and you cannot use the fully qualified form in the resource name.



    Note - Oracle Solaris Cluster does not currently support using the adapter name for netif.
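
    For example, assuming the resource group nfs-rg, the logical hostname schost-1, and hypothetical IPMP groups named sc_ipmp0 on nodes phys-schost-1 and phys-schost-2, the command might look like the following:

    # clreslogicalhostname create -g nfs-rg -h schost-1 -N sc_ipmp0@phys-schost-1,sc_ipmp0@phys-schost-2 schost-1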


  8. From any cluster node, create the SUNW.nfs subdirectory.

    Create a subdirectory called SUNW.nfs below the directory that the Pathprefix property identifies in Step 3.

    # mkdir Pathprefix-directory/SUNW.nfs
  9. Create a dfstab.resource file in the SUNW.nfs directory that you created in Step 8, and set up share options.
    1. Create the Pathprefix/SUNW.nfs/dfstab.resource file.

      This file contains a set of share commands with the shared path names. The shared paths should be subdirectories on a cluster file system.


      Note - Choose a resource name suffix to identify the NFS resource that you plan to create (in Step 11). A good resource name refers to the task that this resource is expected to perform. For example, a name such as user-nfs-home is a good candidate for an NFS resource that shares user home directories.


    2. Set up the share options for each path that you have created to be shared.

      The format of this file is exactly the same as the format that is used in the /etc/dfs/dfstab file.

      # share -F nfs [-o specific_options] [-d "description"] pathname
      -F nfs

      Identifies the file system type as nfs.

      -o specific_options

      Specifies the share options, such as rw, which grants read-write access to all clients. See the share(1M) man page for a list of options. Set the rw option for Oracle Solaris Cluster.

      -d description

      Describes the file system to add.

      pathname

      Identifies the file system to share.


      Note - If you want to share multiple paths, repeat the share command for each path that you are sharing.


    When you set up your share options, consider the following points.

    • When constructing share options, do not use the root option, and do not mix the ro and rw options.

    • Do not grant access to the hostnames on the cluster interconnect.

      Grant read and write access to all the cluster nodes and logical hosts to enable the Oracle Solaris Cluster HA for NFS monitoring to do a thorough job. However, you can restrict write access to the file system or make the file system entirely read-only. If you do so, Oracle Solaris Cluster HA for NFS fault monitoring can still perform monitoring without having write access.

    • If you specify a client list in the share command, include all the physical hostnames and logical hostnames that are associated with the cluster. Also include the hostnames for all the clients on all the public networks to which the cluster is connected.

    • If you use net groups in the share command (rather than names of individual hosts), add all those cluster hostnames to the appropriate net group.

    The share -o rw command grants write access to all the clients, including the hostnames that the Oracle Solaris Cluster software uses. This command enables Oracle Solaris Cluster HA for NFS fault monitoring to operate most efficiently. See the following man pages for details.

    • dfstab(4)

    • share(1M)

    • share_nfs(1M)
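
    For example, a dfstab.user-nfs-home file (using the hypothetical resource name suffix from the preceding Note) that shares two hypothetical home-directory paths might contain the following lines:

    share -F nfs -o rw -d "home dirs 1" /global/nfs/export/home1
    share -F nfs -o rw -d "home dirs 2" /global/nfs/export/home2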

  10. Register the NFS resource type.
    # clresourcetype register resource-type
    resource-type

    Adds the specified resource type. For Oracle Solaris Cluster HA for NFS, the resource type is SUNW.nfs.
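
    For example, the following command registers the SUNW.nfs resource type:

    # clresourcetype register SUNW.nfs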

  11. Create the NFS resource in the failover resource group.
    # clresource create -g resource-group -t resource-type resource
    -g resource-group

    Specifies the name of a previously created resource group to which this resource is to be added.

    -t resource-type

    Specifies the name of the resource type to which this resource belongs. This name must be the name of a registered resource type.

    resource

    Specifies the name of the resource to add, which you defined in Step 9. This name can be your choice but must be unique within the cluster.

    The resource is created in the enabled state.

  12. Run the clresourcegroup(1CL) command to manage the resource group.
    # clresourcegroup online -M resource-group

Example 1-1 Setting Up and Configuring HA for NFS

The following example shows how to set up and configure HA for NFS.

  1. To create a logical host resource group and specify the path to the administrative files used by NFS (Pathprefix), the following command is run.

    # clresourcegroup create -p Pathprefix=/global/nfs resource-group-1
  2. To add logical hostname resources into the logical host resource group, the following command is run.

    # clreslogicalhostname create -g resource-group-1 -h schost-1 lhresource
  3. To create the directory structure that contains the Oracle Solaris Cluster HA for NFS configuration files, the following command is run.

    # mkdir -p /global/nfs/SUNW.nfs
  4. To create the dfstab.resource file under the nfs/SUNW.nfs directory and set share options, the following command is run.

    # share -F nfs -o rw=engineering -d "home dirs" /global/nfs/SUNW.nfs

    Note - You also need to add this entry to the dfstab.resource file.


  5. To register the NFS resource type, the following command is run.

    # clresourcetype register SUNW.nfs
  6. To create the NFS resource in the resource group, the following command is run.

    # clresource create -g resource-group-1 -t SUNW.nfs r-nfs

    The resource is created in the enabled state.

  7. To enable the resources and their monitors, manage the resource group, and switch the resource group into online state, the following command is run.

    # clresourcegroup online -M resource-group-1

How to Change Share Options on an NFS File System

If you use the rw, rw=, ro, or ro= options to the share -o command, NFS fault monitoring works best if you grant access to all the physical hosts or netgroups that are associated with all the Oracle Solaris Cluster servers.

If you use netgroups in the share(1M) command, add all the Oracle Solaris Cluster hostnames to the appropriate netgroup. Ideally, grant both read access and write access to all the Oracle Solaris Cluster hostnames to enable the NFS fault probes to do a complete job.
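
For example, assuming hypothetical physical hostnames phys-schost-1 and phys-schost-2, the logical hostname schost-1, and a hypothetical client netgroup eng-clients, a share entry that grants read-write access to all of them might look like the following:

# share -F nfs -o rw=phys-schost-1:phys-schost-2:schost-1:eng-clients -d "home dirs" /global/nfs/export/home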


Note - Before you change share options, read the share_nfs(1M) man page to understand which combinations of options are legal.


You can also modify the shared paths and options dynamically without taking the Oracle Solaris Cluster HA for NFS resource offline. See How to Dynamically Update Shared Paths on an NFS File System.

To modify the share options on an NFS file system while the Oracle Solaris Cluster HA for NFS resource is offline, perform the following steps.

  1. On a cluster member, become superuser or assume a role that provides solaris.cluster.admin RBAC authorization.
  2. Turn off fault monitoring on the NFS resource.
    # clresource unmonitor resource
  3. Test the new share options.
    1. Before you edit the dfstab.resource file with new share options, execute the new share command to verify the validity of your combination of options.
      # share -F nfs [-o specific_options] [-d "description"] pathname
      -F nfs

      Identifies the file system type as NFS.

      -o specific_options

      Specifies an option. You might use rw, which grants read-write access to all the clients.

      -d description

      Describes the file system to add.

      pathname

      Identifies the file system to share.

    2. If the new share command fails, immediately execute another share command with the old options. When the new command executes successfully, proceed to Step 4.
  4. Edit the dfstab.resource file with the new share options.
    1. To remove a path from the dfstab.resource file, perform the following steps in order.
      1. Execute the unshare(1M) command.
        # unshare -F nfs [-o specific_options] pathname
        -F nfs

        Identifies the file system type as NFS.

        -o specific_options

        Specifies the options that are specific to NFS file systems.

        pathname

        Identifies the file system that is made unavailable.

      2. From the dfstab.resource file, delete the share command for the path that you want to remove.
        # vi dfstab.resource
    2. To add a path or change an existing path in the dfstab.resource file, verify that the mount point is valid, then edit the dfstab.resource file.

    Note - The format of this file is exactly the same as the format that is used in the /etc/dfs/dfstab file. Each line consists of a share command.


  5. Enable fault monitoring on the NFS resource.
    # clresource monitor resource

How to Dynamically Update Shared Paths on an NFS File System

You can dynamically modify the share command on an NFS file system without taking the Oracle Solaris Cluster HA for NFS resource offline. The general procedure consists of modifying the dfstab.resource file for Oracle Solaris Cluster HA for NFS and then manually running the appropriate command, either the share command or the unshare command. The command is immediately effective, and Oracle Solaris Cluster HA for NFS makes these paths highly available.

Ensure that the paths that are shared are always available to Oracle Solaris Cluster HA for NFS during failover so that local paths (on non-HA file systems) are not used.

If paths on a file system that is managed by HAStoragePlus are shared, the HAStoragePlus resource must be in the same resource group as the Oracle Solaris Cluster HA for NFS resource, and the dependency between them must be set correctly.
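
For example, assuming a hypothetical HAStoragePlus resource named hastp-res and an HA for NFS resource named nfs-res in the same resource group, the dependency could be set as follows:

# clresource set -p Resource_dependencies=hastp-res nfs-res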

  1. Use the cluster status command to find out the node on which the Oracle Solaris Cluster HA for NFS resource is online.
  2. On this node run the /usr/sbin/share command to see the list of paths currently shared. Determine the changes to make to this list.
  3. To add a new shared path, perform the following steps.
    1. Add the share command to the dfstab.resource file.

      Oracle Solaris Cluster HA for NFS shares the new path the next time it checks the file. The frequency of these checks is controlled by the Thorough_Probe_Interval property (by default 120 seconds).

    2. Run the share command manually to make the newly added shared path effective immediately. Running the command manually is recommended because the user can be certain that the shared paths are available to potential clients. Oracle Solaris Cluster HA for NFS detects that the newly added path is already shared and does not report an error.
  4. To unshare a path, perform the following steps.
    1. Run the dfmounts(1M) command to ensure that no clients are currently using the path.

      Although a path can be unshared even if clients are using it, these clients would receive a stale file handle error and would need special care (forced unmount, or even reboot) to recover.

    2. Remove the share command from the dfstab.resource file.
    3. Run the unshare command manually.
  5. To modify options for an existing shared path, perform the following steps.
    1. Modify the dfstab.resource file as needed.
    2. Run the appropriate command (share or unshare) manually.
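
For example, after adding an entry for a hypothetical new path /global/nfs/export/projects to the dfstab.resource file, you could make the path available immediately by running the share command manually on the node where the resource is online:

# share -F nfs -o rw -d "project data" /global/nfs/export/projects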

How to Tune HA for NFS Method Timeouts

The time that HA for NFS methods require to finish depends on the number of paths that the resources share through the dfstab.resource file. The default timeout for these methods is 300 seconds.

As a general guideline, allocate 10 seconds toward the method timeouts for each path that is shared. Default timeouts are designed to handle 30 shared paths.
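
For example, if the resources share 50 paths, allocate at least 50 x 10 = 500 seconds and increase any method timeout that is currently set to a lower value.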

Increase the method timeouts if the number of shared paths is greater than 30.

To change a method timeout, use the clresource set command, as in the following example.

# clresource set -p Prenet_start_timeout=500 resource