Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Enabling Highly Available Local File Systems

Using a highly available local file system improves the performance of I/O-intensive data services. To make a local file system highly available in a Sun Cluster environment, use the HAStoragePlus resource type.

The instructions for each Sun Cluster data service that is I/O intensive explain how to configure the data service to operate with the HAStoragePlus resource type. For more information, see the individual Sun Cluster data service guides.

For instructions for setting up the HAStoragePlus resource type for an NFS-exported file system, see How to Set Up the HAStoragePlus Resource Type for an NFS-Exported File System.


Note –

Do not use the HAStoragePlus resource type to make a root file system highly available.


Configuration Requirements for Highly Available Local File Systems

Any file system on multihost disks must be accessible from any host that is directly connected to those multihost disks. To meet this requirement, configure the highly available local file system as follows:


Note –

The use of a volume manager with the global devices for a highly available local file system is optional.


Format of Device Names for Devices Without a Volume Manager

If you are not using a volume manager, use the appropriate format for the name of the underlying storage device. The format to use depends on the type of storage device as follows:

  * Block devices: /dev/global/dsk/dDsS

  * Raw devices: /dev/global/rdsk/dDsS

The replaceable items in these names are as follows:

  * D is an integer that specifies the device number

  * S is an integer that specifies the slice number

Sample Entries in /etc/vfstab for Highly Available Local File Systems

The following examples show entries in the /etc/vfstab file for global devices that are to be used for highly available local file systems.


Example 2–27 Entries in /etc/vfstab for a Global Device Without a Volume Manager

This example shows entries in the /etc/vfstab file for a global device on a physical disk without a volume manager.

/dev/global/dsk/d1s0  /dev/global/rdsk/d1s0  /global/local-fs/nfs  ufs  5  no  logging


Example 2–28 Entries in /etc/vfstab for a Global Device With Solaris Volume Manager

This example shows entries in the /etc/vfstab file for a global device that uses Solaris Volume Manager.

/dev/md/kappa-1/dsk/d0  /dev/md/kappa-1/rdsk/d0  /global/local-fs/nfs  ufs  5  no  logging


Example 2–29 Entries in /etc/vfstab for a Global Device With VxVM

This example shows entries in the /etc/vfstab file for a global device that uses VxVM.


/dev/vx/dsk/kappa-1/appvol  /dev/vx/rdsk/kappa-1/appvol  /global/local-fs/nfs  vxfs  5  no  log

How to Set Up the HAStoragePlus Resource Type for an NFS-Exported File System

The HAStoragePlus resource type performs the same functions as HAStorage, synchronizing the startup of resource groups with the startup of the disk device groups on which they depend. The HAStoragePlus resource type also makes a local file system highly available. For background information about making a local file system highly available, see Enabling Highly Available Local File Systems. To use both of these features, set up the HAStoragePlus resource type.


Note –

These instructions explain how to use the HAStoragePlus resource type with the UNIX file system. For information about using the HAStoragePlus resource type with the Sun StorEdge™ QFS file system, see your Sun StorEdge QFS documentation.


The following example uses a simple NFS service that exports home directory data from the locally mounted directory /global/local-fs/nfs/export/home. The example assumes the following:

Steps
  1. Become superuser on a cluster member.

  2. Determine whether the HAStoragePlus resource type and the SUNW.nfs resource type are registered.

    The following command prints a list of registered resource types.


    # scrgadm -p | egrep Type
    
  3. If necessary, register the HAStoragePlus resource type and the SUNW.nfs resource type.


    # scrgadm -a -t SUNW.HAStoragePlus
    # scrgadm -a -t SUNW.nfs
    
  4. Create the failover resource group nfs-rg.


    # scrgadm -a -g nfs-rg -y PathPrefix=/global/local-fs/nfs
    
  5. Create a logical host resource of type SUNW.LogicalHostname.


    # scrgadm -a -j nfs-lh-rs -g nfs-rg -L -l log-nfs
    
  6. Create the resource nfs-hastp-rs of type HAStoragePlus.


    # scrgadm -a -j nfs-hastp-rs -g nfs-rg -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/local-fs/nfs \
    -x AffinityOn=True
    

    Note –

    You can use the FilesystemMountPoints extension property to specify a list of one or more mount points for file systems. This list can consist of mount points for both local file systems and global file systems. The mount at boot flag is ignored by HAStoragePlus for global file systems.
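    For example, a minimal sketch of a resource that manages two file systems might specify a comma-separated list of mount points as follows. The second mount point, /global/local-fs/data, is hypothetical and is shown only to illustrate the list format.


    # scrgadm -a -j nfs-hastp-rs -g nfs-rg -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/local-fs/nfs,/global/local-fs/data \
    -x AffinityOn=True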


  7. Bring online the resource group nfs-rg on a cluster node.

    The node where the resource group is brought online becomes the primary node for the /global/local-fs/nfs file system's underlying global device partition. The file system /global/local-fs/nfs is then locally mounted on this node.


    # scswitch -Z -g nfs-rg
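
    To verify that the resource group and its resources are online, you can list their status, for example with the scstat -g command.


    # scstat -g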
    
  8. Create the resource nfs-rs of type SUNW.nfs and specify its resource dependency on the resource nfs-hastp-rs.

    The file dfstab.nfs-rs must be present in /global/local-fs/nfs/SUNW.nfs.


    # scrgadm -a -g nfs-rg -j nfs-rs -t SUNW.nfs \
    -y Resource_dependencies=nfs-hastp-rs
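
    For example, the dfstab.nfs-rs file might contain a share command such as the following sketch. The rw option and the exported directory /global/local-fs/nfs/export/home are illustrative; adjust them to match your configuration.


    share -F nfs -o rw /global/local-fs/nfs/export/home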
    

    Note –

    Before you can set the dependency in the nfs-rs resource, the nfs-hastp-rs resource must be online.


  9. Take offline the resource group nfs-rg.


    # scswitch -F -g nfs-rg
    
  10. Bring online the nfs-rg group on a cluster node.


    # scswitch -Z -g nfs-rg
    

    Caution –

    Ensure that you switch only the resource group. Do not attempt to switch the device group. If you attempt to switch the device group, the states of the resource group and the device group become inconsistent, causing the resource group to fail over.


    Whenever the service is migrated to a new node, the primary I/O path for /global/local-fs/nfs will always be online and colocated with the NFS servers. The file system /global/local-fs/nfs is locally mounted before the NFS server is started.
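
    For example, to migrate the service to another cluster node, switch only the resource group. The node name phys-schost-2 in the following sketch is hypothetical.


    # scswitch -z -g nfs-rg -h phys-schost-2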