Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Enabling Highly Available Local File Systems

Using a highly available local file system improves the performance of I/O intensive data services. To make a local file system highly available in a Sun Cluster environment, use the HAStoragePlus resource type.

You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone by mounting it in the global zone and then performing a loopback mount in the non-global zone.
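The mount-point requirement above can be sketched as follows. The zone name, zonepath, and mount point are assumptions chosen for illustration, and the BASE variable is a dry-run prefix so the commands can be exercised against a scratch directory instead of the live root of a cluster node.

```shell
# Sketch: create matching mount points for a file system that
# SUNW.HAStoragePlus will loopback-mount into a non-global zone.
# The zonepath (/zones/myzone) and mount point are hypothetical.
# BASE is a dry-run prefix; leave it empty on a real cluster node.
BASE=${BASE:-/tmp/hasp-demo}
FS=/global/local-fs/nfs              # mount point in the global zone
ZONEPATH=/zones/myzone               # hypothetical zonepath

mkdir -p "${BASE}${FS}"                    # mount point in the global zone
mkdir -p "${BASE}${ZONEPATH}/root${FS}"    # same path inside the zone root
```

On a real node, the same two mkdir commands (without BASE) prepare the global zone and the zone root before the resource is created.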


Note –

Local file systems include the Unix File System (UFS), Quick File System (QFS), Veritas File System (VxFS), and Solaris ZFS (Zettabyte File System).


The instructions for each Sun Cluster data service that is I/O intensive explain how to configure the data service to operate with the HAStoragePlus resource type. For more information, see the individual Sun Cluster data service guides.


Note –

Do not use the HAStoragePlus resource type to make a root file system highly available.


Sun Cluster provides the following tools for setting up the HAStoragePlus resource type to make local file systems highly available:

Sun Cluster Manager and the clsetup utility enable you to add resources to the resource group interactively. Configuring these resources interactively reduces the possibility of configuration errors that might result from command syntax errors or omissions. Sun Cluster Manager and the clsetup utility ensure that all required resources are created and that all required dependencies between resources are set.

Configuration Requirements for Highly Available Local File Systems

Any file system on multihost disks must be accessible from any host that is directly connected to those multihost disks. To meet this requirement, configure the highly available local file system as follows:


Note –

The use of a volume manager with the global devices for a highly available local file system is optional.


Format of Device Names for Devices Without a Volume Manager

If you are not using a volume manager, use the appropriate format for the name of the underlying storage device. The format to use depends on the type of storage device as follows:

The replaceable elements in these device names are as follows:

Sample Entries in /etc/vfstab for Highly Available Local File Systems

The following examples show entries in the /etc/vfstab file for global devices that are to be used for highly available local file systems.


Note –

Solaris ZFS (Zettabyte File System) does not use the /etc/vfstab file.



Example 2–32 Entries in /etc/vfstab for a Global Device Without a Volume Manager

This example shows entries in the /etc/vfstab file for a global device on a physical disk without a volume manager.

/dev/global/dsk/d1s0  /dev/global/rdsk/d1s0  /global/local-fs/nfs  ufs  5  no  logging


Example 2–33 Entries in /etc/vfstab for a Global Device With Solaris Volume Manager

This example shows entries in the /etc/vfstab file for a global device that uses Solaris Volume Manager.

/dev/md/kappa-1/dsk/d0  /dev/md/kappa-1/rdsk/d0  /global/local-fs/nfs  ufs  5  no  logging


Example 2–34 Entries in /etc/vfstab for a Global Device With VxVM

This example shows entries in the /etc/vfstab file for a global device that uses VxVM.


/dev/vx/dsk/kappa-1/appvol  /dev/vx/rdsk/kappa-1/appvol  /global/local-fs/nfs  vxfs  5  no  log
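Each of the entries above is a single seven-field line: device to mount, device to fsck, mount point, file system type, fsck pass, mount at boot, and mount options, as defined in vfstab(4). A quick field-count check, sketched below with awk, can catch a malformed entry before it causes a mount failure; check_vfstab_line is a hypothetical helper, not a Sun Cluster command.

```shell
# Sketch: verify that a vfstab entry has the expected seven fields.
# check_vfstab_line is a hypothetical helper, not part of Sun Cluster.
check_vfstab_line() {
    echo "$1" | awk 'NF != 7 { print "malformed entry"; exit 1 }
                     { print "fields OK:", $3, "type", $4 }'
}
check_vfstab_line \
    '/dev/global/dsk/d1s0 /dev/global/rdsk/d1s0 /global/local-fs/nfs ufs 5 no logging'
```

For the sample line from Example 2–32 this prints `fields OK: /global/local-fs/nfs type ufs`.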

How to Set Up the HAStoragePlus Resource Type by Using the clsetup Utility

The following instructions explain how to set up the HAStoragePlus resource type by using the clsetup utility. Perform this procedure from any cluster node.

This procedure provides the long forms of the Sun Cluster maintenance commands. Most commands also have short forms. Except for the forms of the command names, the commands are identical. For a list of the commands and their short forms, see Appendix A, Sun Cluster Object-Oriented Commands.

Before You Begin

Ensure that the following prerequisites are met:

  1. Become superuser on any cluster node.

  2. Start the clsetup utility.


    # clsetup
    

    The clsetup main menu is displayed.

  3. Type the number that corresponds to the option for data services and press Return.

    The Data Services menu is displayed.

  4. Type the number that corresponds to the option for configuring the file system and press Return.

    The clsetup utility displays the list of prerequisites for performing this task.

  5. Verify that the prerequisites are met, and press Return to continue.

    The clsetup utility displays a list of the cluster nodes or zones that can master the highly available HAStoragePlus resource.

  6. Select the nodes or zones that can master the highly available HAStoragePlus resource.

    • To accept the default selection of all listed nodes in an arbitrary order, type a and press Return.

    • To select a subset of the listed nodes or zones, type a comma-separated or space-separated list of the numbers that correspond to the nodes. Then press Return.

      Ensure that the nodes are listed in the order in which the nodes are to appear in the HAStoragePlus resource group's node list. The first node in the list is the primary node of this resource group.

    • To select all nodes in a particular order, type a comma-separated or space-separated ordered list of the numbers that correspond to the nodes and press Return.

  7. To confirm your selection of nodes, type d and press Return.

The clsetup utility displays a list of the types of shared storage where data can be stored.

Type the numbers that correspond to the types of shared storage that you are using for storing the data and press Return.

    The clsetup utility displays the file system mount points that are configured in the cluster. If there are no existing mount points, the clsetup utility allows you to define a new mount point.

Specify the default mount directory, the raw device path, the Global Mount option, and the Check File System Periodically option, and press Return.

The clsetup utility displays the properties of the mount point that the utility will create.

  10. To create the mount point, type d and press Return.

    The clsetup utility displays the available file system mount points.


    Note –

    You can use the c option to define another new mount point.


  11. Select the file system mount points.

    • To accept the default selection of all listed file system mount points in an arbitrary order, type a and press Return.

    • To select a subset of the listed file system mount points, type a comma-separated or space-separated list of the numbers that correspond to the file system mount points and press Return.

  12. To confirm your selection of file system mount points, type d and press Return.

    The clsetup utility displays the global disk sets and device groups that are configured in the cluster.

  13. Select the global device groups.

    • To accept the default selection of all listed device groups in an arbitrary order, type a and press Return.

    • To select a subset of the listed device groups, type a comma-separated or space-separated list of the numbers that correspond to the device groups and press Return.

  14. To confirm your selection of device groups, type d and press Return.

    The clsetup utility displays the names of the Sun Cluster objects that the utility will create.

  15. If you require a different name for any Sun Cluster object, change the name as follows.

    1. Type the number that corresponds to the name that you are changing and press Return.

      The clsetup utility displays a screen where you can specify the new name.

    2. At the New Value prompt, type the new name and press Return.

    The clsetup utility returns you to the list of the names of the Sun Cluster objects that the utility will create.

  16. To confirm your selection of Sun Cluster object names, type d and press Return.

    The clsetup utility displays information about the Sun Cluster configuration that the utility will create.

  17. To create the configuration, type c and press Return.

    The clsetup utility displays a progress message to indicate that the utility is running commands to create the configuration. When configuration is complete, the clsetup utility displays the commands that the utility ran to create the configuration.

  18. (Optional) Type q and press Return repeatedly until you quit the clsetup utility.

    If you prefer, you can leave the clsetup utility running while you perform other required tasks before using the utility again. If you choose to quit clsetup, the utility recognizes your existing resource group when you restart the utility.

  19. Verify that the HAStoragePlus resource has been created.

    Use the clresource(1CL) utility for this purpose. By default, the clsetup utility assigns the name node_name-rg to the resource group.


    # clresource show node_name-rg
    

How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available

You perform the following primary tasks to make a local Solaris ZFS (Zettabyte File System) highly available:

This section describes how to complete both tasks.

  1. Create a ZFS storage pool.


Caution –

    Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. Once a disk is in a storage pool, you can configure that disk as a quorum device. Or, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.


    Observe the following requirements when you create a ZFS storage pool in a Sun Cluster configuration:

    • Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.

    • Ensure that the Solaris device identifier that you specify to the zpool command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.


    Note –

You can create a zpool by using a full disk or a disk slice. Creating a zpool with a full disk, by specifying a Solaris logical device, is preferred because ZFS can then enable the disk write cache for better performance. ZFS labels the disk with an EFI label when a full disk is provided.


    See Creating a ZFS Storage Pool in Solaris ZFS Administration Guide for information about how to create a ZFS storage pool.
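The visibility requirement above can be sketched as a small filter over cldevice output. The sample lines below are hardcoded from this chapter's Example 2–35; on a cluster node you would pipe in the live output of `cldevice list -v` instead.

```shell
# Sketch: check that a Solaris device intended for `zpool create`
# appears in `cldevice list -v` output. Sample output is hardcoded
# here; on a cluster node, pipe in `cldevice list -v` instead.
dev=/dev/rdsk/c1t8d0
printf '%s\n' \
    'd3                  phys-schost-1:/dev/rdsk/c1t8d0' \
    'd3                  phys-schost-2:/dev/rdsk/c1t8d0' |
awk -v dev="$dev" '
    index($2, dev) { found = 1; print "DID", $1, "maps to", $2 }
    END            { exit !found }'
```

The filter exits nonzero when the device is not visible to the DID subsystem, which would indicate a device that must not be given to the zpool command.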

  2. In the ZFS storage pool that you just created, create a ZFS file system.

    You can create more than one ZFS file system in the same ZFS storage pool.


    Note –

    HAStoragePlus does not support file systems created on ZFS volumes.

    Do not set the ZFS mount point property to legacy or to none. You cannot use SUNW.HAStoragePlus to manage a ZFS storage pool that contains a file system for which the ZFS mount point property is set to either one of these values.

    Do not place a ZFS file system in the FilesystemMountPoints extension property.


    See Creating a ZFS File System Hierarchy in Solaris ZFS Administration Guide for information about how to create a ZFS file system in a ZFS storage pool.
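The mount point restriction above can be checked before handing the pool to SUNW.HAStoragePlus. The sketch below scans `zfs list` style output for legacy or none mount points; check_mountpoints is a hypothetical helper, and the sample output is hardcoded (on a cluster node you would pipe in `zfs list -H -o name,mountpoint -r HAzpool`).

```shell
# Sketch: scan `zfs list -H -o name,mountpoint -r <pool>` output for
# file systems whose mountpoint property is legacy or none, which
# SUNW.HAStoragePlus cannot manage. check_mountpoints is a
# hypothetical helper; sample output is hardcoded below.
check_mountpoints() {
    awk '$2 == "legacy" || $2 == "none" {
             print "unsupported mountpoint on", $1; bad = 1 }
         END { exit bad }'
}
printf '%s\n' \
    'HAzpool         /HAzpool' \
    'HAzpool/export  /HAzpool/export' |
check_mountpoints && echo "all mountpoints OK"
```

The helper exits nonzero and names the offending file system when a legacy or none mount point is found.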

  3. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  4. Create a failover resource group.


    # clresourcegroup create resource-group
    
  5. Register the HAStoragePlus resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  6. Create a HAStoragePlus resource for the local ZFS.


    # clresource create -g resource-group -t SUNW.HAStoragePlus \
    -p Zpools="zpool" resource
    

    The resource is created in the enabled state.

  7. Bring the resource group that contains the HAStoragePlus resource online and in a managed state.


    # clresourcegroup online -M resource-group
    

Example 2–35 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS Highly Available

The following example shows the commands to make a local ZFS highly available.


phys-schost-1% su
Password: 
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0
You can create a zpool using a disk slice by specifying a Solaris device
identifier:
# zpool create HAzpool c1t8d0s2
or you can create a zpool using a disk slice by specifying a logical device
identifier:
# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
                    -p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg

How to Delete a HAStoragePlus Resource That Makes a Local Solaris ZFS Highly Available

    Disable and delete the HAStoragePlus resource that makes a local Solaris ZFS (Zettabyte File System) highly available.


    # clresource delete -F -g resource-group -t SUNW.HAStoragePlus resource