Perform the following primary tasks to make a local Solaris ZFS highly available:
Create a ZFS storage pool.
Create a ZFS file system in that ZFS storage pool.
Set up the HAStoragePlus resource that manages the ZFS storage pool.
This section describes how to complete these tasks.
Caution - If you plan to manually import a ZFS pool that is already managed by the cluster, ensure that the pool is not imported on more than one node. Importing a pool on multiple nodes can lead to problems. For more information, see Changing a ZFS Pool Configuration That Is Managed by an HAStoragePlus Resource.
Caution - Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.
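The unconfigure-then-reconfigure workflow described in the caution above might look like the following sketch. The DID device d3 and the pool name HApool are placeholders; substitute the devices and names from your own configuration.

```shell
# Unconfigure the disk as a quorum device before touching it
clquorum remove d3

# Add the now-unconfigured disk to a new ZFS storage pool
# (pool name and slice are examples only)
zpool create HApool /dev/did/dsk/d3s2

# Reconfigure the disk as a quorum device
clquorum add d3
```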
Observe the following requirements when you create a ZFS storage pool in an Oracle Solaris Cluster configuration:
Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.
Ensure that the Oracle Solaris device identifier that you specify to the zpool(1M) command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.
See Creating a Basic ZFS Storage Pool in Managing ZFS File Systems in Oracle Solaris 11.2 for information about how to create a ZFS storage pool.
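As a minimal sketch of the pool-creation step, assuming two shared disks named c1t8d0 and c1t9d0 that are visible to every node in the resource group's node list (the device and pool names are placeholders):

```shell
# Confirm that the devices are visible to the cluster before creating the pool
cldevice list -v

# Create a mirrored ZFS storage pool from the two shared disks
zpool create HApool mirror c1t8d0 c1t9d0

# Verify that the pool is healthy before placing it under cluster control
zpool status HApool
```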
Observe the following requirements when you create a ZFS file system in the ZFS pool:
You can create more than one ZFS file system in the same ZFS storage pool.
HAStoragePlus does not support file systems created on ZFS file system volumes.
Do not place a ZFS file system in the FilesystemMountPoints extension property.
If necessary, change the failmode property setting of the ZFS storage pool to either continue or panic, whichever best fits your requirements.
You can choose to encrypt a ZFS file system when you create it. The HAStoragePlus resource automatically mounts all of the file systems in the pool when the resource is brought online. An encrypted file system that requires interactive entry of a key or passphrase during mount prevents the resource from coming online. To avoid problems, do not use keysource=raw | hex | passphrase,prompt|pkcs11: for the encrypted file systems of a ZFS storage pool that is managed by a cluster using an HAStoragePlus resource. Instead, use keysource=raw | hex | passphrase,file://|https://, where the key or passphrase location is accessible to the cluster nodes on which the HAStoragePlus resource is brought online.
See Creating a ZFS File System Hierarchy in Managing ZFS File Systems in Oracle Solaris 11.2 for information about how to create a ZFS file system in a ZFS storage pool.
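Continuing the hypothetical HApool example, the file-system requirements above might be satisfied as follows. The dataset names, the failmode choice, and the key file location are placeholders for illustration only.

```shell
# Create a file-system hierarchy in the pool
zfs create HApool/export
zfs create HApool/export/home

# Set the pool's failmode property to continue (or panic), per your requirements
zpool set failmode=continue HApool

# For an encrypted file system, supply the key from a location accessible to
# all nodes rather than prompting interactively at mount time
# (the key file path is an example only)
zfs create -o encryption=on \
    -o keysource=passphrase,file:///shared/keys/home.key \
    HApool/export/secure
```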
# clresourcegroup create resource-group
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g resource-group -t SUNW.HAStoragePlus \
-p Zpools=zpool -p ZpoolsSearchDir=/dev/did/dsk \
resource
The default location that is searched for the devices of ZFS storage pools is /dev/dsk. You can override it by using the ZpoolsSearchDir extension property.
The resource is created in the enabled state.
# clresourcegroup online -M resource-group
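After the resource group is brought online, you can verify the result. The resource-group, resource, and pool names below are the placeholders used in the preceding steps.

```shell
# Verify that the resource group and its resource are online
clresourcegroup status resource-group
clresource status resource

# The pool should now be imported on the node that hosts the resource group
zpool list zpool
```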
The following example shows the commands to make a local ZFS file system highly available.
phys-schost-1% su
Password:
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0

You can create a ZFS storage pool using a disk slice by specifying a Solaris device identifier:

# zpool create HAzpool c1t8d0s2

or you can create a ZFS storage pool using a disk slice by specifying a logical device identifier:

# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus -p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg

Example 2-41 Setting Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available for a Zone Cluster
The following example shows the steps to make a local ZFS file system highly available in a zone cluster sczone.
phys-schost-1# cldevice list -v
# zpool create HAzpool c1t8d0
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=HAzpool
clzc:sczone:dataset> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=HAzpool hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg