
Planning and Administering Data Services for Oracle® Solaris Cluster 4.4


Updated: November 2019
 
 

How to Set Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available

Perform the following primary tasks to make a local ZFS file system highly available:

  • Create a ZFS storage pool.

  • Create a ZFS file system in that ZFS storage pool.

  • Set up the HAStoragePlus resource that manages the ZFS storage pool.

This section describes how to complete these tasks.



Caution  -  If you plan to manually import a ZFS pool that is already managed by the cluster, ensure that the pool is not imported on more than one node at the same time. Importing a pool on multiple nodes simultaneously can corrupt the pool. For more information, see Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource.



Note -  You can also use the Oracle Solaris Cluster Manager browser interface to create, in a single operation, an HAStoragePlus resource for a ZFS file system and a new resource group to contain it. For Oracle Solaris Cluster Manager log-in instructions, see How to Access Oracle Solaris Cluster Manager in Administering an Oracle Solaris Cluster 4.4 Configuration. After you log in, click Tasks and then click Highly Available Storage to start the wizard.

This wizard requires that all cluster nodes have the same root password.


  1. Create a ZFS storage pool.


    Caution  -  Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, then reconfigure the disk as a quorum device.


    Observe the following requirements when you create a ZFS storage pool in an Oracle Solaris Cluster configuration:

    • Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.

    • Ensure that the Oracle Solaris device identifier that you specify to the zpool(8) command, for example /dev/dsk/c0t0d0, is visible to the cldevice list -v command.


    Note -  You can create the ZFS storage pool on a full disk or on a disk slice.
    • For best performance, create the ZFS storage pool on a full disk. When a full disk is specified, ZFS can enable the disk write cache, which improves performance, and labels the disk with an EFI label.

    • If you are creating a zpool on a DID device, you must specify a slice. Do not use /dev/did/dsk/dN, because that can corrupt the disk label.


    See Creating ZFS Storage Pools in Managing ZFS File Systems in Oracle Solaris 11.4 for information about how to create a ZFS storage pool.

  2. In the ZFS storage pool that you just created, create a ZFS file system.

    Observe the following requirements when you create a ZFS file system in the ZFS pool:

    • You can create more than one ZFS file system in the same ZFS storage pool.

    • HAStoragePlus does not support file systems created on ZFS volumes.

    • Do not place a ZFS file system in the FilesystemMountPoints extension property.

    • If necessary, change the ZFS failmode property setting to either continue or panic, whichever best fits your requirements.


      Note -  The ZFS pool failmode property is set to wait by default. This setting can cause the HAStoragePlus resource to block, which might prevent a failover of the resource group. The recommended setting is failmode=continue, combined with setting the RebootOnFailure property to TRUE on the HAStoragePlus resource that manages the zpool. Alternatively, failmode=panic guarantees a panic, a crash dump, and a failover on the loss of the storage, regardless of the RebootOnFailure setting. However, RebootOnFailure=TRUE can be more responsive because the HAStoragePlus fault monitor can detect the storage outage sooner.
    • You can choose to encrypt a ZFS file system when you create it. The HAStoragePlus resource automatically mounts all of the file systems in the pool when the resource is brought online. An encrypted file system that requires interactive entry of a key or a passphrase during mount cannot be mounted automatically, so bringing the resource online fails. To avoid this problem, do not use the prompt or pkcs11: key locations (for example, keysource=passphrase,prompt) for the encrypted file systems of a ZFS storage pool that is managed by an HAStoragePlus resource. Instead, use a file:// or https:// key location (for example, keysource=passphrase,file://), where the key or passphrase location is accessible to all cluster nodes on which the HAStoragePlus resource can come online.

    See Managing ZFS File Systems in Oracle Solaris 11.4 for information about how to create a ZFS file system in a ZFS storage pool.
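
    The failmode and keysource guidelines above can be sketched with the following commands. The pool name HAzpool and the resource name hasp-rs come from the examples later in this procedure; the key file path /etc/zfs-keys/export.key is an illustrative assumption, not a required location:

    # zpool set failmode=continue HAzpool
    # zfs create -o encryption=on \
    -o keysource=passphrase,file:///etc/zfs-keys/export.key HAzpool/export
    # clresource set -p RebootOnFailure=TRUE hasp-rs

    The key file must exist at the same path on every node that can host the resource, and the passphrase that it contains must be at least eight characters long.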

  3. On any node in the cluster, assume the root role that provides solaris.cluster.modify RBAC authorization.
  4. Create a failover resource group.
    # clresourcegroup create resource-group
  5. Register the HAStoragePlus resource type.
    # clresourcetype register SUNW.HAStoragePlus
  6. Create an HAStoragePlus resource for the local ZFS file system.
    # clresource create -g resource-group -t SUNW.HAStoragePlus \
    -p Zpools=zpool -p ZpoolsSearchDir=/dev/did/dsk \
    resource

    By default, HAStoragePlus searches for the devices of ZFS storage pools in /dev/dsk. You can override this location by setting the ZpoolsSearchDir extension property, as the preceding command does with /dev/did/dsk.

    The resource is created in the enabled state.

  7. Bring the resource group that contains the HAStoragePlus resource online and in a managed state.
    # clresourcegroup online -M resource-group
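
    After the resource group is online, you can optionally verify the configuration. The following status commands show the state of the resource group, the resource, and the imported pool on the node that currently hosts the resource group; resource-group, resource, and zpool stand for the names that you used in the previous steps:

    # clresourcegroup status resource-group
    # clresource status resource
    # zpool status zpool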
Example 51  Setting Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available for a Global Cluster

The following example shows the commands to make a local ZFS file system highly available.

phys-schost-1% su
Password:
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0
You can create a ZFS storage pool on a disk slice by specifying the Oracle Solaris
device identifier:
# zpool create HAzpool c1t8d0s2
Alternatively, you can create the ZFS storage pool on a disk slice by specifying the
DID device identifier:
# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus -p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg
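To verify that the file system is highly available, you can switch the resource group to the other node and confirm that the pool moves with it. The node name phys-schost-2 comes from the cldevice output shown earlier:

# clresourcegroup switch -n phys-schost-2 hasp-rg
# zpool list HAzpool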
Example 52  Setting Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available for a Zone Cluster

The following example shows the steps to make a local ZFS file system highly available in the zone cluster sczone.

phys-schost-1# cldevice list -v
# zpool create HAzpool c1t8d0
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=HAzpool
clzc:sczone:dataset> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=HAzpool hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg