You perform the following primary tasks to make a local Solaris ZFS (Zettabyte File System) highly available:
Create a ZFS storage pool.
Create a ZFS file system in that ZFS storage pool.
Set up the HAStoragePlus resource that manages the ZFS storage pool.
This section describes how to complete these tasks.
Create a ZFS storage pool.
Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. Once a disk is in a storage pool, you cannot configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, and then reconfigure the disk as a quorum device.
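As a sketch of the unconfigure-then-reconfigure sequence, assuming the disk that corresponds to DID device d3 is currently a quorum device (the device name d3 and the pool name HAzpool are illustrative):

```shell
# Unconfigure the disk as a quorum device before adding it to a pool
# (device and pool names are illustrative).
clquorum remove d3

# Create the storage pool on the now-unconfigured disk.
zpool create HAzpool /dev/did/dsk/d3s2

# Reconfigure the disk as a quorum device.
clquorum add d3
```

Ensure that the cluster retains enough quorum votes while the disk is unconfigured.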
Observe the following requirements when you create a ZFS storage pool in a Sun Cluster configuration:
Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.
Ensure that the Solaris device identifier that you specify to the zpool(1M) command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.
You can create a ZFS storage pool from a full disk or from a disk slice. Creating the pool from a full disk, by specifying a Solaris logical device, is preferred because it allows the ZFS file system to enable the disk write cache for better performance. ZFS labels the disk with an EFI label when a full disk is provided.
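As a sketch of the two approaches (the pool and device names are illustrative), the full-disk and disk-slice forms of the command look like the following:

```shell
# Preferred: full disk. ZFS writes an EFI label and can enable
# the disk write cache (pool and device names are illustrative).
zpool create HAzpool c1t8d0

# Alternative: a disk slice (s2 here), specified as a DID device.
zpool create HAzpool /dev/did/dsk/d3s2
```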
See Creating a ZFS Storage Pool in Solaris ZFS Administration Guide for information about how to create a ZFS storage pool.
In the ZFS storage pool that you just created, create a ZFS file system.
You can create more than one ZFS file system in the same ZFS storage pool.
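For example, a pool can hold a hierarchy of file systems (the pool and file system names are illustrative):

```shell
# Create two file systems in the same storage pool
# (names are illustrative).
zfs create HAzpool/export
zfs create HAzpool/export/home
```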
HAStoragePlus does not support file systems created on ZFS file system volumes.
Do not place a ZFS file system in the FilesystemMountPoints extension property.
See Creating a ZFS File System Hierarchy in Solaris ZFS Administration Guide for information about how to create a ZFS file system in a ZFS storage pool.
On any node in the cluster, become superuser or assume a role that provides solaris.cluster.modify RBAC authorization.
Create a failover resource group.
# clresourcegroup create resource-group
Register the HAStoragePlus resource type.
# clresourcetype register SUNW.HAStoragePlus
Create a HAStoragePlus resource for the local ZFS file system.
# clresource create -g resource-group -t SUNW.HAStoragePlus \
-p Zpools=zpool -p ZpoolsSearchDir=/dev/did/dsk \
resource
By default, HAStoragePlus searches for the devices of ZFS storage pools in /dev/dsk. You can override this location by setting the ZpoolsSearchDir extension property.
The resource is created in the enabled state.
Bring the resource group that contains the HAStoragePlus resource online and in a managed state.
# clresourcegroup online -M resource-group
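To verify the result, you might check the status of the resource group, the resource, and the pool; output varies by configuration, and the names here are the placeholders used in the preceding steps:

```shell
# Confirm that the resource group and resource are online.
clresourcegroup status resource-group
clresource status resource

# Confirm that the pool is imported on the node where
# the resource group is online.
zpool status zpool
```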
The following example shows the commands to make a local ZFS file system highly available.
phys-schost-1% su
Password:
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0

You can create a ZFS storage pool on a disk slice by specifying the Solaris device identifier:

# zpool create HAzpool c1t8d0s2

Or you can create a ZFS storage pool on a disk slice by specifying the logical device identifier:

# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg
The following example shows the steps to make a local ZFS file system highly available in a zone cluster sczone.
phys-schost-1# cldevice list -v
# zpool create HAzpool c1t8d0
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=HAzpool
clzc:sczone:dataset> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg \
-t SUNW.HAStoragePlus -p Zpools=HAzpool hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg