Planning and Administering Data Services for Oracle® Solaris Cluster 4.4


Updated: May 2019

How to Set Up an HAStoragePlus Resource for a zpool for Globally Mounted ZFS File Systems

Perform this task to create an HAStoragePlus resource that uses a ZFS storage pool for a cluster file system.

To create a cluster file system with HAStoragePlus that uses a UFS file system, instead go to How to Set Up an HAStoragePlus Resource for Cluster File Systems Using a UFS File System.

  1. Create a ZFS storage pool.
    # zpool create mypool /dev/dsk/c0t0d0

    Caution  -  Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.
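
    For example, assuming a hypothetical DID instance d4 that is currently configured as a quorum device, a sequence along the following lines unconfigures the quorum device, adds the disk to the storage pool, and then reconfigures the disk as a quorum device. This is a sketch only; substitute your own device and pool names.

    # clquorum remove d4
    # zpool create mypool /dev/did/dsk/d4s0
    # clquorum add d4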


    Observe the following requirements when you create a ZFS storage pool in an Oracle Solaris Cluster configuration:

    • Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.

    • Ensure that the Oracle Solaris device identifier that you specify to the zpool(8) command, for example /dev/dsk/c0t0d0, is visible in the output of the cldevice list -v command, as shown in the sketch that follows this list.
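
    For example, to confirm that a device you plan to use appears in the cluster device namespace, you might run a check such as the following on a cluster node. The device c0t0d0 is the illustrative device used in this procedure.

    # cldevice list -v | grep c0t0d0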


    Note -  The ZFS storage pool can be created using a full disk or a disk slice.
    • For best performance, create a ZFS storage pool by using a full disk. When you provide a whole Oracle Solaris logical device, ZFS can enable the disk write cache, which improves performance. ZFS labels the disk with an EFI label when a full disk is provided.

    • If you are creating a zpool on a DID device, you must specify a slice. Do not use the whole-disk path /dev/did/dsk/dN, because that can corrupt the disk label. See the example that follows this note.
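
    For example, a pool created on a hypothetical DID instance d4 would use a slice of that device rather than the whole disk. This is a sketch only; substitute your own DID instance, slice, and pool name.

    # zpool create mypool /dev/did/dsk/d4s0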


    See Creating ZFS Storage Pools in Managing ZFS File Systems in Oracle Solaris 11.4 for information about how to create a ZFS storage pool.

  2. Export the newly created ZFS zpool.
    # zpool export mypool
  3. On any node in the cluster, assume the root role or a role that provides solaris.cluster.modify authorization.
  4. Create a failover or scalable resource group as desired.
    • Perform the following step to create a failover resource group.

      # clresourcegroup create mypool-rg
    • Perform the following step to create a scalable resource group.

      # clresourcegroup create -S mypool-rg
  5. Register the SUNW.HAStoragePlus resource type.
    # clresourcetype register SUNW.HAStoragePlus
  6. Create an HAStoragePlus resource for the ZFS file system.
    • The default location to search for devices of ZFS storage pools is /dev/dsk. HAStoragePlus automatically creates the required ZFS pool device groups with the poolaccess property set to global. You can verify this as shown at the end of this step.
      # clresource create -g mypool-rg -t SUNW.HAStoragePlus \
      -p GlobalZpools=mypool hasp-resource
    • If the devices of the ZFS storage pool are located anywhere other than /dev/dsk, you must manually create the ZFS pool device group first and then create an HAStoragePlus resource for each ZFS pool. For example:
      # cldevicegroup create [-p poolaccess=global] \
      [-p searchpaths=/dev/did/dsk] [-p readonly=false] \
      [-n node-list] -t zpool mypool
      
      # clresource create -g mypool-rg -t SUNW.HAStoragePlus \
      -p GlobalZpools=mypool hasp-resource 
      

    The resource is created in the enabled state.
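
    After the resource is created, you can optionally display the zpool device group that HAStoragePlus created for the pool, for example:

      # cldevicegroup show mypool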

  7. Set the dependency of data service resources on hasp-resource.
    # clresource set -p  Resource_dependencies_offline_restart=\
    hasp-resource application-resource
  8. Bring the resource group that contains the HAStoragePlus resource online and into a managed state.
    # clresourcegroup online -M mypool-rg

    You now have the option of creating additional ZFS file systems in the storage pool. All of these file systems are mounted globally as cluster file systems. Follow Step 9 to create additional ZFS file systems in the storage pool.

    Observe the following requirements when you create a ZFS file system in the ZFS pool:

    • You can create more than one ZFS file system in the same ZFS storage pool.

    • HAStoragePlus does not support file systems created on ZFS volumes.

    • Do not place a ZFS file system in the FileSystemMountPoints extension property.

    • If necessary, change the ZFS failmode property setting to either continue or panic, whichever best fits your requirements.


      Note -  The ZFS pool failmode property is set to wait by default. This setting can result in the HAStoragePlus resource blocking, which might prevent a failover of the resource group. The recommended zpool setting is failmode=continue. In the HAStoragePlus resource that manages this zpool, set the RebootOnFailure property to TRUE. Alternatively, the zpool setting failmode=panic guarantees a panic, a crash dump, and a failover on the loss of the storage. The failmode=panic setting works regardless of the setting of the RebootOnFailure property. However, setting RebootOnFailure=TRUE can be more responsive because its monitor can detect the storage outage sooner. A sketch of setting these properties appears after this list.
    • You can choose to encrypt a ZFS file system when you create it. The HAStoragePlus resource automatically mounts all the file systems in the pool when the resource comes online. An encrypted file system that requires interactive entry of a key or a passphrase during mount prevents the resource from coming online.

      To avoid problems, do not use keysource=raw | hex | passphrase,prompt|pkcs11: for the encrypted file systems of a ZFS storage pool that is managed by an HAStoragePlus resource. You can use keysource=raw | hex | passphrase,file://|https://, where the key or passphrase location is accessible to the cluster nodes on which the HAStoragePlus resource comes online.
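
    For example, the following sketch applies the recommended failmode and RebootOnFailure settings and creates an encrypted file system whose passphrase is read from a file. The dataset name mypool/secfs and the key file URI are hypothetical; run the zpool and zfs commands on the node where the pool is currently imported, and substitute your own names and locations.

    # zpool set failmode=continue mypool
    # clresource set -p RebootOnFailure=TRUE hasp-resource
    # zfs create -o encryption=on \
    -o keysource=passphrase,file:///net/keyserver/keys/secfs.key mypool/secfs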

  9. (Optional) Create an additional file system by executing the zfs create command on the node that has the zpool imported.
    1. Locate the primary node so that you can run the zfs command to create an additional file system.
      # cldg status mypool

      Identify the node on which the device group is primary.

    2. Create an additional file system by executing the zfs create command on the device group's primary node.
      # zfs create mypool/filesystem1

      You can verify that the file system has been globally mounted by executing the following command on each cluster node.

      # df -h /mypool/filesystem1
Example 45  Setting up an HAStoragePlus Resource with a ZFS Pool Containing a Cluster File System in a Global Cluster

This example shows how to configure an HAStoragePlus resource to manage a ZFS storage pool, mypool, in a global cluster. The file system datasets of mypool will be mounted under /mypool and will be available on all the global-cluster nodes.

The following commands are executed in the global zone:

phys-schost-1# zpool create mypool /dev/dsk/c1t0d5
phys-schost-1# zfs create mypool/fs1
phys-schost-1# zpool export mypool 
phys-schost-1# clresourcegroup create hasp-rg
phys-schost-1# clresourcetype register SUNW.HAStoragePlus
phys-schost-1# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p GlobalZpools=mypool hasp-rs
phys-schost-1# clresourcegroup online -M hasp-rg

For more information about virtual devices, see the zpool(8) man page.

Example 46  Setting up an HAStoragePlus Resource with a ZFS-based Cluster File System in a Zone Cluster

This example shows how to configure an HAStoragePlus resource with a ZFS-based cluster file system /mypool/fs1 in a zone cluster, sczone. In this example, the file system is made available on all the zone-cluster nodes by using a scalable resource group in the zone cluster. The cluster file system is made available to the zone-cluster nodes at the mount point /global/fs1.

This example configuration uses a cluster file system /mypool/fs1 mounted in the global cluster, which is then loopback-mounted onto the zone-cluster nodes where the resource group is online.

The following commands are executed in the global zone:

phys-schost-1# zpool create mypool /dev/dsk/c1t0d5
phys-schost-1# zfs create mypool/fs1
phys-schost-1# clresourcegroup create -S hasp-rg
phys-schost-1# clresourcetype register SUNW.HAStoragePlus
phys-schost-1# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p GlobalZpools=mypool hasp-rs
phys-schost-1# clresourcegroup online -M hasp-rg

phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/fs1
clzc:sczone:fs> set special=/mypool/fs1
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone:fs> exit

phys-schost-1# clresourcegroup create -S -Z sczone \
-p RG_affinities=++global:hasp-rg \
zc-hasp-rg
phys-schost-1# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost-1# clresource create -Z sczone -g zc-hasp-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/fs1 \
-p Resource_dependencies_offline_restart=global:hasp-rs zc-hasp-rs
phys-schost-1# clresourcegroup online -Z sczone -M zc-hasp-rg

For more information about virtual devices, see the zpool(8) man page.