Perform this task to create an HAStoragePlus resource that uses a ZFS storage pool for a cluster file system.
To create a cluster file system with HAStoragePlus that uses a UFS file system, instead go to How to Set Up an HAStoragePlus Resource for Cluster File Systems Using a UFS File System.
# zpool create mypool /dev/dsk/c0t0d0
Caution - Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.
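The unconfigure/add/reconfigure order described in the caution can be sketched as follows. This is a hedged illustration only: the quorum device name d5, the slice d5s0, and the pool name are assumptions, not values from your configuration.

```shell
# Hypothetical quorum device d5; all names here are illustrative.
# 1. Unconfigure the quorum device first, so that adding the disk to the
#    pool cannot destroy live quorum configuration information.
clquorum remove d5
# 2. Add the underlying disk to the ZFS storage pool
#    (the disk is relabeled as an EFI disk at this point).
zpool add mypool /dev/did/dsk/d5s0
# 3. Reconfigure the disk as a quorum device now that it is in the pool.
clquorum add d5
```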
Observe the following requirements when you create a ZFS storage pool in an Oracle Solaris Cluster configuration:
Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.
Ensure that the Oracle Solaris device identifier that you specify to the zpool(8) command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.
For best performance, create the ZFS storage pool from a full disk. When you give ZFS a whole disk (an Oracle Solaris logical device), ZFS can enable the disk write cache, which improves performance, and ZFS labels the disk with an EFI label.
If you are creating a zpool on a DID device, you must specify a slice. Do not use the whole-disk device /dev/did/dsk/dN, because that can corrupt the disk label.
See Creating ZFS Storage Pools in Managing ZFS File Systems in Oracle Solaris 11.4 for information about how to create a ZFS storage pool.
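As a hedged illustration of the visibility and full-disk requirements above, you might verify the device on one node before creating the pool. The device names c0t0d0 and d4 are examples, not values from your cluster:

```shell
# Confirm the Oracle Solaris device is known to the cluster device framework;
# the matching DID instance (for example, d4) appears in the output.
cldevice list -v | grep c0t0d0
# Create the pool on the full disk for best performance:
zpool create mypool /dev/dsk/c0t0d0
# Or, when using a DID device, specify a slice, never the whole dN device:
zpool create mypool /dev/did/dsk/d4s0
```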
# zpool export mypool
Perform the following step to create a failover resource group.
# clresourcegroup create mypool-rg
Perform the following step to create a scalable resource group.
# clresourcegroup create -S mypool-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g mypool-rg -t SUNW.HAStoragePlus \
-p GlobalZpools=mypool hasp-resource
# cldevicegroup create [-p poolaccess=global] \
[-p searchpaths=/dev/did/dsk] [-p readonly=false] \
[-n node-list] -t zpool mypool
# clresource create -g mypool-rg -t SUNW.HAStoragePlus \
-p GlobalZpools=mypool hasp-resource
The resource is created in the enabled state.
# clresource set -p Resource_dependencies_offline_restart=\
hasp-resource application-resource
# clresourcegroup online -M mypool-rg
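After the resource group is brought online, you can check its state. This is a sketch; the group and resource names match the examples above and would differ in your configuration:

```shell
# Verify that the resource group and the HAStoragePlus resource are online.
clresourcegroup status mypool-rg
clresource status hasp-resource
# The pool should now be imported and its datasets mounted globally:
zpool list mypool
```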
You now have the option of creating additional ZFS file systems in the storage pool. All of these file systems are mounted globally as cluster file systems. To create additional ZFS file systems in the storage pool, follow Step 9 and Step 10.
Observe the following requirements when you create a ZFS file system in the ZFS pool:
You can create more than one ZFS file system in the same ZFS storage pool.
HAStoragePlus does not support file systems created on ZFS volumes.
Do not place a ZFS file system in the FilesystemMountPoints extension property.
If necessary, change the ZFS failmode property setting to either continue or panic, whichever best fits your requirements.
You can choose to encrypt a ZFS file system when you create it. The HAStoragePlus resource automatically mounts all the file systems in the pool when the resource is brought online. An encrypted file system that requires interactive entry of a key or a passphrase during mount cannot be mounted automatically, so the resource will have a problem coming online.
To avoid such problems, do not use keysource=raw | hex | passphrase,prompt|pkcs11: for the encrypted file systems of a ZFS storage pool that a cluster manages through an HAStoragePlus resource. Instead, you can use keysource=raw | hex | passphrase,file://|https://, where the key or passphrase location is accessible to every cluster node on which the HAStoragePlus resource can come online.
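The failmode and key-source requirements above can be sketched as follows. Note that failmode is a pool-level property set with zpool, and the file:// key path is an assumption for illustration; the file must be readable on every node where the resource can come online:

```shell
# failmode is a pool property; choose continue or panic per your requirements.
zpool set failmode=continue mypool
# Create an encrypted file system with a non-interactive key source.
# The file:// path below is illustrative, not a real location.
zfs create -o encryption=on \
    -o keysource=passphrase,file:///path/to/keyfile \
    mypool/securefs
```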
# cldg status mypool
Identify the node on which the device group is primary.
# zfs create mypool/filesystem1
You can verify that the file system has been globally mounted by executing the following command on each cluster node.
# df -h /mypool/filesystem1
This example shows how to configure an HAStoragePlus resource to manage a ZFS storage pool, mypool, in a global cluster. The file system datasets of mypool will be mounted under /mypool and will be available on all the global-cluster nodes.
The following commands are executed in the global zone:
phys-schost-1# zpool create mypool /dev/dsk/c1t0d5
phys-schost-1# zfs create mypool/fs1
phys-schost-1# zpool export mypool
phys-schost-1# clresourcegroup create hasp-rg
phys-schost-1# clresourcetype register SUNW.HAStoragePlus
phys-schost-1# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p GlobalZpools=mypool hasp-rs
phys-schost-1# clresourcegroup online -M hasp-rg
For more information about virtual devices, see the zpool(8) man page.
Example 46 Setting up an HAStoragePlus Resource with a ZFS-based Cluster File System in a Zone Cluster
This example shows how to configure an HAStoragePlus resource with a ZFS-based cluster file system /mypool/fs1 in a zone cluster, sczone. In this example, the file system is made available on all the zone-cluster nodes by using a scalable resource group in the zone cluster. The cluster file system is made available to the zone-cluster nodes on the mount point /global/fs1.
This example configuration uses a cluster file system /mypool/fs1 mounted in the global cluster, which is then loopback-mounted onto the zone-cluster nodes where the resource group is online.
The following commands are executed in the global zone:
phys-schost-1# zpool create mypool /dev/dsk/c1t0d5
phys-schost-1# zfs create mypool/fs1
phys-schost-1# clresourcegroup create -S hasp-rg
phys-schost-1# clresourcetype register SUNW.HAStoragePlus
phys-schost-1# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p GlobalZpools=mypool hasp-rs
phys-schost-1# clresourcegroup online -M hasp-rg
phys-schost-1# clzonecluster configure sczone
clzc:sczone> add fs
clzc:sczone:fs> set dir=/global/fs1
clzc:sczone:fs> set special=/mypool/fs1
clzc:sczone:fs> set type=lofs
clzc:sczone:fs> end
clzc:sczone> exit
phys-schost-1# clresourcegroup create -S -Z sczone \
-p RG_affinities=++global:hasp-rg \
zc-hasp-rg
phys-schost-1# clresourcetype register -Z sczone SUNW.HAStoragePlus
phys-schost-1# clresource create -Z sczone -g zc-hasp-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/global/fs1 \
-p resource_dependencies_offline_restart=global:hasp-rs zc-hasp-rs
phys-schost-1# clresourcegroup online -Z sczone -M zc-hasp-rg
For more information about virtual devices, see the zpool(8) man page.