Sun Cluster Data Services Planning and Administration Guide for Solaris OS

Procedure: How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS Highly Available

You perform the following primary tasks to make a local Solaris ZFS (Zettabyte File System) highly available:

  • Create a ZFS storage pool.

  • Create a ZFS file system in that ZFS storage pool.

This section describes how to complete both tasks.

  1. Create a ZFS storage pool.


    Caution –

    Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, and then reconfigure the disk as a quorum device.
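
    As a sketch of the second approach, assuming a DID device d3 that is currently configured as a quorum device and an example pool name HAzpool (both names are illustrative), you might unconfigure the quorum device, create the pool, and then reconfigure it:

    # clquorum remove d3
    # zpool create HAzpool /dev/did/dsk/d3s2
    # clquorum add d3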


    Observe the following requirements when you create a ZFS storage pool in a Sun Cluster configuration:

    • Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.

    • Ensure that the Solaris device identifier that you specify to the zpool command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.
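
    As a quick check of the second requirement (a sketch, with c0t0d0 as an example device), you can filter the cldevice output for the device you intend to use:

    # cldevice list -v | grep c0t0d0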


    Note –

    You can create a zpool by using either a full disk or a disk slice. Creating the zpool with a full disk, by specifying a Solaris logical device, is preferred because it allows ZFS to enable the disk write cache, which improves performance. ZFS labels the disk with an EFI label when a full disk is provided.


    See Creating a ZFS Storage Pool in Solaris ZFS Administration Guide for information about how to create a ZFS storage pool.
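
    For example, to create a pool on a full disk as the preceding note recommends (the pool name HAzpool and the device c1t8d0 are illustrative), you might run:

    # zpool create HAzpool c1t8d0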

  2. In the ZFS storage pool that you just created, create a ZFS file system.

    You can create more than one ZFS file system in the same ZFS storage pool.


    Note –

    HAStoragePlus does not support file systems created on ZFS volumes.

    Do not set the ZFS mount point property to legacy or to none. You cannot use SUNW.HAStoragePlus to manage a ZFS storage pool that contains a file system for which the ZFS mount point property is set to either one of these values.

    Do not include a ZFS file system in the FilesystemMountPoints extension property.
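
    To confirm that a file system's mountpoint property is not set to legacy or none, you can query the property (a sketch; HAzpool/export is an example file system name):

    # zfs get mountpoint HAzpool/export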


    See Creating a ZFS File System Hierarchy in Solaris ZFS Administration Guide for information about how to create a ZFS file system in a ZFS storage pool.
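
    For example, assuming a pool named HAzpool, you could create a file system in it and then list the resulting hierarchy (HAzpool/data is an illustrative name):

    # zfs create HAzpool/data
    # zfs list -r HAzpool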

  3. On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.

  4. Create a failover resource group.


    # clresourcegroup create resource-group
    
  5. Register the HAStoragePlus resource type.


    # clresourcetype register SUNW.HAStoragePlus
    
  6. Create an HAStoragePlus resource for the local ZFS file system.


    # clresource create -g resource-group -t SUNW.HAStoragePlus \
    -p Zpools="zpool" resource
    

    The resource is created in the enabled state.
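
    You can verify the state of the new resource (a sketch; hasp-rs is an illustrative resource name):

    # clresource status hasp-rs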

  7. Bring the resource group that contains the HAStoragePlus resource online and in a managed state.


    # clresourcegroup online -M resource-group
    

Example 2–35 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS Highly Available

The following example shows the commands to make a local ZFS highly available.


phys-schost-1% su
Password: 
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0
You can create a zpool using a disk slice by specifying a Solaris device 
identifier:
# zpool create HAzpool c1t8d0s2
or you can create a zpool using a disk slice by specifying a logical device 
identifier:
# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
                    -p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg
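
After the resource group is brought online, a status check (a sketch, using the same example names as above) confirms where the group and resource are running:

# clresourcegroup status hasp-rg
# clresource status hasp-rs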