Sun Cluster 2.2 Software Installation Guide

11.3 Administering NFS in Sun Cluster Systems

This section describes the procedures used to administer NFS in Sun Cluster systems.

11.3.1 Adding an Existing File System to a Logical Host

After Sun Cluster is running, use the following procedure to add a file system to a logical host.


Note -

Use caution when manually mounting multihost disk file systems that are not listed in the Sun Cluster vfstab.logicalhost and dfstab.logicalhost files. If you forget to unmount that file system, a subsequent switchover of the logical host containing that file system will fail because the device is busy. However, if that file system is listed in the appropriate Sun Cluster vfstab.logicalhost files, the software can forcefully unmount the file system, and the volume manager disk group release commands will succeed.


11.3.2 How to Add an Existing File System to a Logical Host

  1. From a cconsole(1) window, use an editor such as vi to add an entry for the file system to the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file.

    By using a cconsole(1) window, you can make changes on all potential masters of these file systems.
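For example, assuming the standard vfstab(4) format also applies to the vfstab.logicalhost file, an entry for a Solstice DiskSuite metadevice might look like the following (the metadevice and mount point names are illustrative):

```
/dev/md/hahost1/dsk/d2 /dev/md/hahost1/rdsk/d2 /hahost1/2 ufs 1 no -
```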

  2. Run the mount(1M) command to mount the new file system.

    Specify the device and mount point. Alternatively, you can wait until the next membership reconfiguration for the file system to be automatically mounted.

    Here is an example for Solstice DiskSuite.

    # mount -F ufs /dev/md/hahost1/dsk/d2 /hahost1/2
    

    Here is an example for Sun StorEdge Volume Manager.

    # mount -F vxfs /dev/vx/dsk/dg1/vol1 /vol1
    
  3. Add the Sun Cluster HA for NFS file system to the logical host.

    1. From a cconsole(1) window, use an editor such as vi to make the appropriate entry for each file system that will be shared by NFS to the vfstab.logicalhost and dfstab.logicalhost files.

      By using a cconsole(1) window, you can make changes on all potential masters of these file systems.
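For example, a dfstab.logicalhost entry is a share(1M) command line such as the following (the options, description, and mount point are illustrative):

```
share -F nfs -o rw -d "home directories" /hahost1/2
```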

    2. Execute a membership reconfiguration of the servers.

      # haswitch -r
      

      Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.

      Alternatively, the file system can be shared manually. If the procedure is performed manually, the fault monitoring processes will not be started either locally or remotely until the next membership reconfiguration is performed.

11.3.3 Removing a File System From a Logical Host

Use the following procedure to remove a file system from a logical host running Sun Cluster HA for NFS.

11.3.4 How to Remove a File System From a Logical Host

  1. From a cconsole(1) window, use an editor such as vi to remove the entry for the file system from the /etc/opt/SUNWcluster/conf/hanfs/vfstab.logicalhost file.

    By using a cconsole(1) window, you can make changes on all the potential masters of these file systems.

  2. Run the umount(1M) command to unmount the file system.
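For example, to unmount the file system mounted in the earlier Solstice DiskSuite example (the mount point is illustrative):

```
# umount /hahost1/2
```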

  3. (Optional) Clear the associated trans device.

    1. If you are running Solstice DiskSuite, clear the trans metadevice and its mirrors using either the metaclear -r command or the metatool(1M) GUI.

    2. If you are running Sun StorEdge Volume Manager, dissociate the log subdisk from the plex.

      Refer to your volume manager documentation for more information on clearing logging devices.
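For example, with Solstice DiskSuite, the trans metadevice and its components can be cleared recursively as follows (the metadevice name d2 is illustrative):

```
# metaclear -r d2
```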

11.3.5 Adding an NFS File System to a Logical Host

Use this procedure to add an NFS file system to a logical host.

11.3.6 How to Add an NFS File System to a Logical Host

  1. From a cconsole(1) window, use an editor such as vi to make the appropriate entry for each file system that will be shared by NFS to the vfstab.logicalhost and dfstab.logicalhost files.

    By using a cconsole(1) window, you can make changes on all potential masters of these file systems.

  2. Execute a membership reconfiguration of the servers.

    # haswitch -r
    

    Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.

    Alternatively, the file system can be shared manually. If the procedure is performed manually, the fault monitoring processes will not be started either locally or remotely until the next membership reconfiguration is performed.
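For example, to share the file system manually, run the share(1M) command as root on the server that currently masters the logical host (the options and mount point are illustrative):

```
# share -F nfs -o rw /hahost1/2
```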

11.3.7 Removing an NFS File System From a Logical Host

Use this procedure to remove a Sun Cluster HA for NFS file system from a logical host.

11.3.8 How to Remove an NFS File System From a Logical Host

  1. From a cconsole(1) window, use an editor such as vi to remove the entry for the file system from the /etc/opt/SUNWcluster/conf/hanfs/dfstab.logicalhost file.

    By using a cconsole(1) window, you can make changes on all potential masters of these file systems.

  2. Run the unshare(1M) command.

    The fault monitoring system will try to access the file system until the next membership reconfiguration. Errors will be logged, but a takeover of services will not be initiated by the Sun Cluster software.
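For example (the mount point is illustrative):

```
# unshare -F nfs /hahost1/2
```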

  3. (Optional) Remove the file system from the logical host. If you want to retain the UFS file system for another purpose, such as a highly available DBMS file system, skip to Step 4.

    To perform this task, use the procedure described in "11.3.3 Removing a File System From a Logical Host".

  4. Execute a membership reconfiguration of the servers.

    # haswitch -r
    

    Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.

11.3.9 Changing Share Options on an NFS File System

If you specify the rw, rw=, ro, or ro= options with the -o flag of the share(1M) command, NFS fault monitoring will work best if you grant access to all the physical host names or netgroups associated with all Sun Cluster servers.

If you use netgroups in the share(1M) command, add all of the Sun Cluster host names to the appropriate netgroup. Ideally, you should grant both read and write access to all the Sun Cluster host names to enable the NFS fault probes to do a complete job.
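For example, the following share(1M) command grants read-write access to both physical hosts of a two-node cluster (the host names and mount point are illustrative):

```
# share -F nfs -o rw=phys-hahost1:phys-hahost2 /hahost1/2
```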


Note -

Before you change share options, read the share_nfs(1M) man page to understand which combinations of options are legal. When modifying the share options, execute your proposed new share(1M) command interactively, as root, on the Sun Cluster server that currently masters the logical host. This gives you immediate feedback on whether your combination of share options is legal. If the new share(1M) command fails, immediately execute another share(1M) command with the old options. After you have a working share(1M) command, change the dfstab.logicalhost file to incorporate the new share(1M) command.


11.3.10 How to Change Share Options on an NFS File System

  1. From a cconsole(1) window, use an editor such as vi to make the appropriate changes to the dfstab.logicalhost files.

    By using a cconsole(1) window, you can make changes on all the potential masters of these file systems.

  2. Execute a membership reconfiguration of the servers.

    # haswitch -r
    

    Refer to the Sun Cluster 2.2 System Administration Guide for more information on forcing a cluster reconfiguration.

    If a reconfiguration is not possible, you can run the share(1M) command with the new options. Some changes can cause the fault monitoring subsystem to issue messages. For instance, a change from read-write to read-only will generate messages.