Before You Begin
Verify that all the cluster nodes are online.
# clnode status
Ensure that the HA for NFS package is installed.
Ensure that the /etc/netmasks file has IP-address subnet and netmask entries for all logical hostnames. If necessary, edit the /etc/netmasks file to add any missing entries.
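As an illustration, an /etc/netmasks entry for a hypothetical 192.168.10.0 subnet with a /24 netmask would look like the following. The subnet and mask here are assumptions; use the values for your own public networks.

```
192.168.10.0    255.255.255.0
```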
The example output shows a configuration that uses the NIS external naming service.
The following example shows the correct lookup entries.
# svccfg -s svc:/system/name-service/switch listprop config/host
hosts: cluster files [SUCCESS=return] nis
# svccfg -s svc:/system/name-service/switch listprop config/rpc
rpc: files nis
For rpc, files must precede any directory or name service. This configuration prevents timing-related errors for rpc lookups during periods of public network or name service unavailability.
For hosts:
# svccfg -s svc:/system/name-service/switch \
setprop config/host = astring: \"cluster files [SUCCESS=return] nis\"
For rpc:
# svccfg -s svc:/system/name-service/switch \
setprop config/rpc = astring: \"files nis\"
Create a Pathprefix directory on the HA file system (cluster file system or highly available local file system). HA for NFS resources will use this directory to maintain administrative information.
You can specify any directory for this purpose. However, you must manually create a Pathprefix directory for each resource group that you create. Additionally, ensure that the directory is at least executable by its owner.
# mkdir -p Pathprefix-directory
# chmod 755 Pathprefix-directory
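As a concrete sketch, the two commands above can be exercised against a hypothetical Pathprefix of /tmp/global/nfs. On a real cluster, the directory must instead live on an HA file system such as /global/nfs.

```shell
# Hypothetical Pathprefix for illustration only; on a real cluster,
# use a directory on a cluster file system or highly available local file system.
PATHPREFIX=/tmp/global/nfs

mkdir -p "$PATHPREFIX"     # create the Pathprefix directory
chmod 755 "$PATHPREFIX"    # rwxr-xr-x: executable by at least the owner

ls -ld "$PATHPREFIX"       # verify the mode and ownership
```

HA for NFS resources later store their administrative files below this directory, so it must exist before you create the resource group that references it.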
# clresourcegroup create [-n nodelist] -p Pathprefix=Pathprefix-directory resource-group
Specifies an optional, comma-separated list of physical node names or IDs that identify potential masters. The order here determines the order in which the Resource Group Manager (RGM) considers primary nodes during failover.
Specifies a directory that resources in this resource group will use to maintain administrative information. This is the directory that you created in Step 3.
Specifies the failover resource group.
To avoid failures because of name service lookups, verify that all IP address-to-hostname mappings that are used by HA for NFS are present in the /etc/inet/hosts file on the server and on each client.
Use the sharectl command to customize the nfsd and lockd options. For more information, see the nfsd(1M), lockd(1M), and sharectl(1M) man pages.
You must set up a logical hostname resource with this step. The logical hostname that you use with HA for NFS cannot be a SharedAddress resource type.
# clreslogicalhostname create -g resource-group -h logical-hostname, … [-N netiflist] lhresource
Specifies the resource group that is to hold the logical hostname resources.
Specifies the logical hostname resource to be added.
Specifies an optional, comma-separated list that identifies the IPMP groups that are on each node. Each element in netiflist must be in the form of netif@node. netif can be given as an IPMP group name, such as sc_ipmp0. The node can be identified by the node name or node ID, such as sc_ipmp0@1 or sc_ipmp0@phys-schost-1.
Create a subdirectory called SUNW.nfs below the directory that the Pathprefix property identifies in Step 4.
# mkdir Pathprefix-directory/SUNW.nfs
Create a dfstab.resource file in the SUNW.nfs subdirectory. This file contains a set of share commands with the shared path names. The shared paths should be subdirectories on a cluster file system.
The format of this file is exactly the same as the format that is used in the /etc/dfs/dfstab file.
# share -F nfs [-o specific_options] [-d "description"] pathname
Identifies the file system type as nfs.
Grants read-write access to all the clients. See the share(1M) man page for a list of options. Set the rw option for Oracle Solaris Cluster.
Describes the file system to add.
Identifies the file system to share.
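Putting this together, the following sketch writes a minimal dfstab.resource file. The Pathprefix (/tmp/global/nfs), the resource name (r-nfs), and the shared path are assumptions for illustration; substitute the values for your own configuration.

```shell
# Assumed Pathprefix and resource name for illustration only.
mkdir -p /tmp/global/nfs/SUNW.nfs

# The file uses exactly the same format as /etc/dfs/dfstab:
# one share command per line.
cat > /tmp/global/nfs/SUNW.nfs/dfstab.r-nfs <<'EOF'
share -F nfs -o rw -d "export home" /global/nfs/export/home
EOF

cat /tmp/global/nfs/SUNW.nfs/dfstab.r-nfs
```

The file name must be dfstab.resource, where resource is the name you will give the SUNW.nfs resource when you create it.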
When you set up your share options, consider the following points.
When constructing share options, do not use the root option, and do not mix the ro and rw options.
Do not grant access to the hostnames on the cluster interconnect.
Grant read and write access to all the cluster nodes and logical hosts to enable HA for NFS monitoring to do a thorough job. However, you can restrict write access to the file system or make the file system entirely read-only. If you do so, HA for NFS fault monitoring can still perform monitoring without having write access.
If you specify a client list in the share command, include all the physical hostnames and logical hostnames that are associated with the cluster. Also include the hostnames for all the clients on all the public networks to which the cluster is connected.
If you use net groups in the share command, rather than names of individual hosts, add all those cluster hostnames to the appropriate net group.
The share -o rw command grants write access to all the clients, including the hostnames that the Oracle Solaris Cluster software uses. This command enables HA for NFS fault monitoring to operate most efficiently. For details, see the share(1M) man page.
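For example, if the share command restricted access to a netgroup, the corresponding /etc/netgroup entry would need to list the cluster's physical and logical hostnames alongside the ordinary clients. The netgroup name and hostnames below are hypothetical.

```
engineering (phys-schost-1,,) (phys-schost-2,,) (schost-1,,) (client-1,,)
```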
# clresourcetype register resource-type
Adds the specified resource type. For HA for NFS, the resource type is SUNW.nfs.
# clresource create -g resource-group -t resource-type resource
Specifies the name of a previously created resource group to which this resource is to be added.
Specifies the name of the resource type to which this resource belongs. This name must be the name of a registered resource type.
Specifies the name of the resource to add, which you defined in Step 9. This name can be your choice but must be unique within the cluster.
The resource is created in the enabled state.
# clresourcegroup online -M resource-group
The following example shows how to set up and configure HA for NFS.
To create a logical host resource group and specify the path to the administrative files used by NFS (Pathprefix), the following command is run.
# clresourcegroup create -p Pathprefix=/global/nfs resource-group-1
To add logical hostname resources into the logical host resource group, the following command is run.
# clreslogicalhostname create -g resource-group-1 -h schost-1 lhresource
To create the directory structure that contains the HA for NFS configuration files, the following command is run.
# mkdir -p /global/nfs/SUNW.nfs
To create the dfstab.resource file under the /global/nfs/SUNW.nfs directory and set share options, the following command is run.
# share -F nfs -o rw=engineering -d "home dirs" /global/nfs/SUNW.nfs
To register the NFS resource type, the following command is run.
# clresourcetype register SUNW.nfs
To create the NFS resource in the resource group, the following command is run.
# clresource create -g resource-group-1 -t SUNW.nfs r-nfs
The resource is created in the enabled state.
To enable the resources and their monitors, manage the resource group, and switch the resource group into online state, the following command is run.
# clresourcegroup online -M resource-group-1