Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Enabling Highly Available Local File Systems

The HAStoragePlus resource type can be used to make a local file system highly available within a Sun Cluster environment. The local file system partitions must reside on global disk groups with affinity switchovers enabled, and the Sun Cluster environment must be configured for failover. Any file system on multi-host disks can then be made accessible from any host that is directly connected to those disks. (You cannot use HAStoragePlus to make a root file system highly available.) The failback settings must be identical for the resource group and its device groups.
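
For a failover file system that HAStoragePlus mounts locally, the /etc/vfstab entry on each node must set the mount-at-boot field to no and must not include the global mount option. The following entry is a minimal sketch; the Solstice DiskSuite metadevice paths and the mount point are assumptions for illustration.

    /dev/md/nfsset/dsk/d100 /dev/md/nfsset/rdsk/d100 /global/local-fs/nfs ufs 2 no logging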

Using a highly available local file system is strongly recommended for some I/O-intensive data services. For those data services, a procedure for configuring the HAStoragePlus resource type has been added to their registration and configuration procedures; see the following sections.

For the procedure to set up the HAStoragePlus resource type for other data services, see How to Set Up HAStoragePlus Resource Type.

How to Set Up HAStoragePlus Resource Type

The HAStoragePlus resource type was introduced in Sun Cluster 3.0 5/02. This resource type performs the same functions as HAStorage: it synchronizes startups between resource groups and disk device groups. HAStoragePlus has an additional feature, the ability to make a local file system highly available. (For background information on making a local file system highly available, see Enabling Highly Available Local File Systems.) To use both of these features, set up the HAStoragePlus resource type.
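
If you want only the device-group synchronization that HAStorage provides, an HAStoragePlus resource can instead name a device group through the GlobalDevicePaths extension property. The following commands are a minimal sketch; the resource group web-rg, the resource web-hastp-rs, and the device group web-dg are assumptions for illustration.

    # scrgadm -a -t SUNW.HAStoragePlus
    # scrgadm -a -j web-hastp-rs -g web-rg -t SUNW.HAStoragePlus \
    -x GlobalDevicePaths=web-dg -x AffinityOn=TRUE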

Before you set up HAStoragePlus, ensure that the local file system partitions reside on global disk groups with affinity switchovers enabled, and that the Sun Cluster environment is configured for failover.
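
Because the failback settings must be identical for the resource group and its device groups, it is worth comparing them before you continue. The following checks are a minimal sketch; the resource group name nfs-rg is an assumption, and the grep filters merely narrow the verbose output.

    # scrgadm -pv -g nfs-rg | grep -i failback
    # scconf -pv | grep -i failback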

The following example uses a simple NFS service that shares out home directory data from the locally mounted directory /global/local-fs/nfs/export/home. To set up the service, perform the following steps.

  1. Become superuser on a cluster member.

  2. Determine whether the resource type is registered.

    The following command prints a list of registered resource types.


    # scrgadm -p | egrep Type
    
  3. If the resource type is not registered, register it.


    # scrgadm -a -t SUNW.nfs
    

  4. Create the failover resource group nfs-rg.


    # scrgadm -a -g nfs-rg -y PathPrefix=/global/local-fs/nfs
    

  5. Create a logical host resource of type SUNW.LogicalHostname.


    # scrgadm -a -j nfs-lh-rs -g nfs-rg -L -l log-nfs
    

  6. Register the HAStoragePlus resource type with the cluster.


    # scrgadm -a -t SUNW.HAStoragePlus
    

  7. Create the resource nfs-hastp-rs of type HAStoragePlus.


    # scrgadm -a -j nfs-hastp-rs -g nfs-rg -t SUNW.HAStoragePlus \
    -x FilesystemMountPoints=/global/local-fs/nfs \
    -x AffinityOn=TRUE
    


    Note –

    The FilesystemMountPoints extension property specifies a list of one or more file system mount points. This list can consist of both local and global file system mount points. HAStoragePlus ignores the mount-at-boot flag for global file systems.


  8. Bring the resource group nfs-rg online on a cluster node.

    This node will become the primary node for the underlying global device partition of the /global/local-fs/nfs file system. The file system /global/local-fs/nfs will then be locally mounted on this node.


    # scswitch -Z -g nfs-rg
    
  9. Register the SUNW.nfs resource type with the cluster, if you did not already do so in Step 3. Then create the resource nfs-rs of type SUNW.nfs and specify its resource dependency on the resource nfs-hastp-rs.

    The file dfstab.nfs-rs will be present in /global/local-fs/nfs/SUNW.nfs.


    # scrgadm -a -t SUNW.nfs
    # scrgadm -a -g nfs-rg -j nfs-rs -t SUNW.nfs \
    -y Resource_dependencies=nfs-hastp-rs
    


    Note –

    The nfs-hastp-rs resource must be online before you can set the dependency in the nfs resource.


  10. Bring the resource nfs-rs online. (A way to verify the result is sketched after this procedure.)


    # scswitch -Z -g nfs-rg
    
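
At this point the resource group nfs-rg and all of its resources should be online. One way to verify this is sketched below; scstat -g prints the status of every resource group and resource in the cluster.

    # scstat -g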

Caution –

Be sure to switch only at the resource group level. Switching at the device group level will confuse the resource group, causing it to fail over.

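To relocate the service manually, switch the resource group, not the device group. The following command is a minimal sketch; the node name phys-schost-2 is an assumption for illustration.

    # scswitch -z -g nfs-rg -h phys-schost-2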

Now, whenever the service is migrated to a new node, the primary I/O path for /global/local-fs/nfs will always be online and collocated with the NFS servers. The file system /global/local-fs/nfs will be locally mounted before the NFS server starts.