This chapter provides instructions for setting up and administering the Sun Cluster HA for NFS data service.
This chapter describes the steps necessary to configure and run Sun Cluster HA for NFS on your Sun Cluster servers. It also describes the steps necessary to add Sun Cluster HA for NFS to a system that is already running Sun Cluster.
Before beginning the tasks in this chapter, see Chapter 2, Planning the Configuration, for more information on setting up file systems. Refer to "Configuration Restrictions" for HA-NFS configuration restrictions.
To avoid failures due to name service lookup, all logical host names should be present in the /etc/hosts file on both the servers and the clients. Configure name service mapping on the servers to consult the local files before NIS or NIS+. This prevents timing-related errors and ensures that ifconfig and statd do not fail when resolving logical host names.
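For example, the hosts entry in /etc/nsswitch.conf should list files ahead of the network name services. A representative entry (your configuration may differ) looks like this:

hosts: files nis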
Table 11-1 shows the high-level steps to configure Sun Cluster HA for NFS to work with Sun Cluster. The tasks should be performed in the order shown.
Table 11-1 High-Level Steps to Configure Sun Cluster HA for NFS
| Task | Go To ... |
|---|---|
| Updating the name service with logical host names | Step 4 in the procedure "How to Install the Server Software". (This should have been done prior to running the scinstall(1M) command.) |
| Modifying name service lookups in the /etc/nsswitch.conf file to access /etc files first | Step 7 in the procedure "How to Install the Server Software". (This should have been done prior to running the scinstall(1M) command.) |
| Initializing NAFO | Step 21 in the procedure "How to Install the Server Software". (This should have been done as part of the scinstall(1M) process.) |
| Setting up logical hosts | Step 22 in the procedure "How to Install the Server Software". (This should have been done as part of the scinstall(1M) process.) |
| Assigning net names and disk groups | Step 25 in the procedure "How to Install the Server Software". (This should have been done as part of the scinstall(1M) process.) |
| Configuring the volume manager | For Solstice DiskSuite configurations, refer to Appendix B, Configuring Solstice DiskSuite. For VERITAS Volume Manager configurations, refer to Appendix C, Configuring VERITAS Volume Manager. |
| Creating NAFO backup groups | Step 9 in the procedure "How to Configure the Cluster". (This should have been done as part of the installation process.) |
| Creating multihost file systems | For Solstice DiskSuite configurations, refer to Appendix B, Configuring Solstice DiskSuite. For VERITAS Volume Manager configurations, refer to Appendix C, Configuring VERITAS Volume Manager. |
| Editing the dfstab.logicalhost files | Refer to "How to Share NFS File Systems". |
| Registering Sun Cluster HA for NFS | Refer to "How to Register and Activate NFS". |
The mount points for NFS file systems placed under the control of Sun Cluster HA for NFS must be the same on all nodes that are capable of mastering the logical host containing those file systems.
To avoid "stale file handle" errors on the client during NFS failovers, make sure that the VERITAS Volume Manager vxio driver has identical pseudo-device major numbers on all cluster nodes. This number can be found in the /etc/name_to_major file after you complete the installation. See Appendix C, Configuring VERITAS Volume Manager, for pseudo-device major number administrative procedures.
This section describes how to set up file systems to be shared by NFS, which you do by editing the logical host's dfstab.logicalhost files.
Before you set up file systems to be shared by NFS, make sure you have configured your logical hosts. When you first configure the cluster, you provide the scinstall(1M) command with information about your logical host configuration. Once the cluster is up, you can configure logical hosts by running either the scinstall(1M) or scconf(1M) commands.
NFS file systems are not shared until you perform a cluster reconfiguration as outlined in "How to Register and Activate NFS".
Create the multihost file systems.
Use the procedures described in Appendix B, Configuring Solstice DiskSuite, and in Appendix C, Configuring VERITAS Volume Manager, to create the multihost file systems.
Verify that all nodes in the cluster are up and running.
From a cconsole(1) window, use an editor such as vi to create and edit the /etc/opt/SUNWcluster/conf/hanfs/dfstab.logicalhost file.
By using a cconsole(1) window, you can make changes on all the potential masters of these file systems. You can also update dfstab.logicalhost on one node and use rcp(1) to copy it to all other cluster nodes that are potential masters of the file systems, as shown in the example below. Add entries for all file systems created in Step 1 that will be shared.
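For example, assuming a logical host named hahost1 and a second potential master named phys-hahost2, the following command copies the edited file to the other node:

# rcp /etc/opt/SUNWcluster/conf/hanfs/dfstab.hahost1 phys-hahost2:/etc/opt/SUNWcluster/conf/hanfs/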
The dfstab.logicalhost file is in dfstab format. The file contains share(1M) commands that use the following syntax.
share [-F fstype] [-o options] [-d "<text>"] <pathname> [resource]
If you use the rw, rw=, ro, or ro= options to the share -o command, grant access to all hostnames that Sun Cluster uses. This enables Sun Cluster HA for NFS fault monitoring to operate most efficiently. Include all physical and logical hostnames that are associated with the Sun Cluster, as well as the hostnames on all public networks to which the Sun Cluster is connected.
If you use netgroups in the share command (rather than names of individual hosts), add all those cluster hostnames to the appropriate netgroup.
Do not grant access to the hostnames on the private nets.
Grant read and write access to all the hosts' host names so that HA-NFS fault monitoring can do a thorough job. However, you can restrict write access to the file system, or make the file system entirely read-only; in that case, Sun Cluster HA for NFS fault monitoring can still perform monitoring without having write access.
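For example, the following entry grants read-write access to an illustrative set of cluster host names (two physical hosts and one logical host):

share -F nfs -o rw=phys-hahost1:phys-hahost2:hahost1 -d "hahost1 fs 1" /hahost1/1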
The resulting file will look similar to this example, which shows the logical host name (hahost1), the file system type (nfs), and the mount point names (/hahost1/1 and /hahost1/2).
share -F nfs -d "hahost1 fs 1" /hahost1/1
share -F nfs -d "hahost1 fs 2" /hahost1/2
When constructing share options, generally avoid using the root option, and avoid mixing ro and rw options.
(Optional) Create the file .probe_nfs_file in each directory to be NFS-shared.
For enhanced fault monitoring, each directory exported by Sun Cluster HA for NFS (that is, each directory listed in the dfstab files for Sun Cluster HA for NFS) should contain the file .probe_nfs_file. For each such directory, cd to the directory and create the file using the touch(1) command:
Do this on the physical host that currently masters the logical host for that dfstab file.
phys-hahost1# cd /hahost1/1
phys-hahost1# touch .probe_nfs_file
After completing these steps, register and activate NFS using the procedure "How to Register and Activate NFS".
After setting up and configuring NFS, you must activate Sun Cluster HA for NFS by using the hareg(1M) command to start the Sun Cluster monitor.
Register Sun Cluster HA for NFS.
Use the hareg(1M) command to register the Sun Cluster HA for NFS data service on all hosts in the Sun Cluster. Run the command on only one node.
# hareg -s -r nfs
The following command registers the Sun Cluster HA for NFS data service only on logical hosts hahost1 and hahost2. Run the command on only one node.
# hareg -s -r nfs -h hahost1,hahost2
Activate the NFS service by invoking the hareg(1M) command on one host.
# hareg -y nfs
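To check the result, you can run hareg(1M) with no arguments on one node; it reports the registered data services and their on or off state.

# hareg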
Execute a membership reconfiguration.
# haswitch -r
Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.
The process of setting up, registering, and activating Sun Cluster HA for NFS on your Sun Cluster servers is now complete.
Create and edit the /etc/opt/SUNWcluster/conf/hanfs/dfstab.logicalhost file.
Follow the instructions in "How to Share NFS File Systems" to edit the dfstab file.
Register and activate NFS.
Follow the instructions in "How to Register and Activate NFS".
This section describes the procedures used to administer NFS in Sun Cluster systems.
After Sun Cluster is running, use the following procedure to add another file system to a logical host.
Use care when manually mounting multihost disk file systems that are not listed in the Sun Cluster vfstab.logicalhost and dfstab.logicalhost files. If you forget to unmount that file system, a subsequent switchover of the logical host containing that file system will fail because the device is busy. However, if that file system is listed in the appropriate Sun Cluster vfstab.logicalhost files, the software can forcefully unmount the file system, and the volume manager disk group release commands will succeed.
From a cconsole(1) window, use an editor such as vi to add an entry for the file system to the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file.
By using a cconsole(1) window, you can make changes on all potential masters of these file systems.
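For example, an entry for a Solstice DiskSuite device (the device names and mount point are illustrative and match the mount example below) might look like this:

/dev/md/hahost1/dsk/d2 /dev/md/hahost1/rdsk/d2 /hahost1/2 ufs 1 no -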
Run the mount(1M) command to mount the new file system.
Specify the device and mount point. Alternatively, you can wait until the next membership reconfiguration for the file system to be automatically mounted.
Here is an example for Solstice DiskSuite.
# mount -F ufs /dev/md/hahost1/dsk/d2 /hahost1/2
Here is an example for VERITAS Volume Manager.
# mount -F vxfs /dev/vx/dsk/dg1/vol1 /vol1
Add the Sun Cluster HA for NFS file system to the logical host.
From a cconsole(1) window, use an editor such as vi to make the appropriate entry for each file system that will be shared by NFS to the vfstab.logicalhost and dfstab.logicalhost files.
By using a cconsole(1) window, you can make changes on all potential masters of these file systems.
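As an illustrative sketch using the device and mount point from the previous procedure, the entry in vfstab.hahost1 might be:

/dev/md/hahost1/dsk/d2 /dev/md/hahost1/rdsk/d2 /hahost1/2 ufs 1 no -

and the matching entry in dfstab.hahost1 might be:

share -F nfs -d "hahost1 fs 2" /hahost1/2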
Execute a membership reconfiguration of the servers.
# haswitch -r
Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.
Alternatively, the file system can be shared manually. If the procedure is performed manually, the fault monitoring processes will not be started either locally or remotely until the next membership reconfiguration is performed.
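To share the file system manually, run the share(1M) command from the dfstab file on the node that currently masters the logical host; for example (names are illustrative):

phys-hahost1# share -F nfs -d "hahost1 fs 2" /hahost1/2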
Use the following procedure to remove a file system from a logical host running Sun Cluster HA for NFS.
From a cconsole(1) window, use an editor such as vi to remove the entry for the file system from the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file.
By using a cconsole(1) window, you can make changes on all the potential masters of these file systems.
Run the umount(1M) command to unmount the file system.
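For example, using the mount point from the earlier examples:

# umount /hahost1/2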
(Optional) Clear the associated trans device.
Refer to your volume manager documentation for more information on clearing logging devices.
Use this procedure to add an NFS file system to a logical host.
From a cconsole(1) window, use an editor such as vi to make the appropriate entry for each file system that will be shared by NFS to the vfstab.logicalhost and dfstab.logicalhost files.
By using a cconsole(1) window, you can make changes on all potential masters of these file systems.
Execute a membership reconfiguration of the servers.
# haswitch -r
Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.
Alternatively, the file system can be shared manually. If the procedure is performed manually, the fault monitoring processes will not be started either locally or remotely until the next membership reconfiguration is performed.
Use this procedure to remove a Sun Cluster HA for NFS file system from a logical host.
From a cconsole(1) window, use an editor such as vi to remove the entry for the file system from the /etc/opt/SUNWcluster/conf/hanfs/dfstab.logicalhost file.
By using a cconsole(1) window, you can make changes on all potential masters of these file systems.
The fault monitoring system will try to access the file system until the next membership reconfiguration. Errors will be logged, but the Sun Cluster software will not initiate a takeover of services.
(Optional) Remove the file system from the logical host. If you want to retain the UFS file system for another purpose, such as a highly available DBMS file system, skip to Step 4.
To perform this task, use the procedure described in "Removing a File System From a Logical Host".
Execute a membership reconfiguration of the servers.
# haswitch -r
Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.
If you use the rw, rw=, ro, or ro= options to the share -o command, NFS fault monitoring will work best if you grant access to all the physical host names or netgroups associated with all Sun Cluster servers.
If you use netgroups in the share(1M) command, add all of the Sun Cluster host names to the appropriate netgroup. Ideally, you should grant both read and write access to all the Sun Cluster host names to enable the NFS fault probes to do a complete job.
Before you change share options, read the share_nfs(1M) man page to understand which combinations of options are legal. When modifying the share options, execute your proposed new share(1M) command interactively, as root, on the Sun Cluster server that currently masters the logical host. This gives you immediate feedback on whether your combination of share options is legal. If the new share(1M) command fails, immediately execute another share(1M) command with the old options. After you have a working new share(1M) command, change the dfstab.logicalhost file to incorporate it.
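For example, to test a hypothetical change of the share for /hahost1/1 to read-only (names are illustrative):

phys-hahost1# share -F nfs -o ro -d "hahost1 fs 1" /hahost1/1

If this command fails, immediately re-execute the previous share(1M) command with the old options before proceeding.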
From a cconsole(1) window, use an editor such as vi to make the appropriate changes to the dfstab.logicalhost files.
By using a cconsole(1) window, you can make changes on all the potential masters of these file systems.
Execute a membership reconfiguration of the servers.
# haswitch -r
Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.
If a reconfiguration is not possible, you can run the share(1M) command with the new options. Some changes can cause the fault monitoring subsystem to issue messages. For instance, a change from read-write to read-only will generate messages.