Oracle® Solaris Cluster Data Services Planning and Administration Guide

Updated: September 2014, E39648-02

How to Set Up the HAStoragePlus Resource Type to Make a Local Solaris ZFS File System Highly Available

Perform the following primary tasks to make a local Solaris ZFS file system highly available:

  • Create a ZFS storage pool.

  • Create a ZFS file system in that ZFS storage pool.

  • Set up the HAStoragePlus resource that manages the ZFS storage pool.

This section describes how to complete these tasks.


Caution  -  If you are planning to manually import a ZFS pool that is already managed by the cluster, ensure that the pool is not imported on more than one node. Importing a pool on multiple nodes simultaneously can corrupt the pool. For more information, see Changing a ZFS Pool Configuration That is Managed by an HAStoragePlus Resource.
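
For example, before a manual import you can confirm that the pool is not already imported elsewhere. Run the following command on each cluster node; the pool should be listed on at most one node. This minimal check assumes the pool name HAzpool that is used in Example 2-40:

# zpool list HAzpool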


  1. Create a ZFS storage pool.

    Caution  -  Do not add a configured quorum device to a ZFS storage pool. When a configured quorum device is added to a storage pool, the disk is relabeled as an EFI disk, the quorum configuration information is lost, and the disk no longer provides a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device. Alternatively, you can unconfigure the disk, add it to the storage pool, and then reconfigure the disk as a quorum device.
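
    For example, the following minimal sketch shows the unconfigure, add, and reconfigure sequence. It assumes that the DID device d3 from Example 2-40 is currently configured as a quorum device:

    # clquorum remove d3
    # zpool create HAzpool /dev/did/dsk/d3s2
    # clquorum add d3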


    Observe the following requirements when you create a ZFS storage pool in an Oracle Solaris Cluster configuration:

    • Ensure that all of the devices from which you create a ZFS storage pool are accessible from all nodes in the cluster. These nodes must be configured in the node list of the resource group to which the HAStoragePlus resource belongs.

    • Ensure that the Oracle Solaris device identifier that you specify to the zpool(1M) command, for example /dev/dsk/c0t0d0, is visible in the output of the cldevice list -v command.


    Note -  You can create a ZFS storage pool on a full disk or on a disk slice. Creating the pool on a full disk by specifying an Oracle Solaris logical device is preferred, because ZFS performs better when it can enable the disk write cache. When a full disk is provided, ZFS labels the disk with an EFI label. If you are creating the zpool on a DID device, you must specify a slice. Do not use the /dev/did/dN device itself, because that can corrupt the disk label.

    See Creating a Basic ZFS Storage Pool in Managing ZFS File Systems in Oracle Solaris 11.2 for information about how to create a ZFS storage pool.
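
    For example, using the device names from Example 2-40, either of the following commands creates a pool that meets these requirements:

    # zpool create HAzpool c1t8d0
    # zpool create HAzpool /dev/did/dsk/d3s2

    The first form gives ZFS the whole disk, so ZFS applies an EFI label and can enable the disk write cache. The second form uses a DID device and therefore must name a slice.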

  2. In the ZFS storage pool that you just created, create a ZFS file system.

    Observe the following requirements when you create a ZFS file system in the ZFS pool:

    • You can create more than one ZFS file system in the same ZFS storage pool.

    • HAStoragePlus does not support file systems created on ZFS volumes.

    • Do not place a ZFS file system in the FilesystemMountPoints extension property.

    • If necessary, change the ZFS pool failmode property setting to either continue or panic, whichever best fits your requirements.


      Note -  The ZFS pool failmode property is set to wait by default. This setting can cause the HAStoragePlus resource to block, which might prevent a failover of the resource group. The recommended setting is failmode=continue, with the reboot_on_failure property set to TRUE on the HAStoragePlus resource that manages the zpool. Alternatively, failmode=panic guarantees a panic, a crash dump, and a failover when the storage is lost, and it works regardless of the setting of the reboot_on_failure property. However, setting reboot_on_failure=TRUE can be more responsive because the resource monitor can detect the storage outage sooner. For example commands, see the sketch at the end of this step.
    • You can choose to encrypt a ZFS file system when you create it. The HAStoragePlus resource automatically mounts all of the file systems in the pool when the resource is brought online. An encrypted file system that requires interactive entry of a key or a passphrase during mount prevents the resource from coming online. To avoid problems, do not use keysource=raw | hex | passphrase,prompt | pkcs11: for the encrypted file systems of a ZFS storage pool that is managed by a cluster using an HAStoragePlus resource. Instead, use keysource=raw | hex | passphrase,file:// | https://, where the key or passphrase location is accessible to all cluster nodes on which the HAStoragePlus resource can go online. See the sketch at the end of this step.

    See Creating a ZFS File System Hierarchy in Managing ZFS File Systems in Oracle Solaris 11.2 for information about how to create a ZFS file system in a ZFS storage pool.
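
    The following minimal sketch applies the recommended failmode setting and creates an encrypted file system whose key does not require interactive entry. It assumes the pool name HAzpool from Example 2-40 and a hypothetical key file location that every cluster node can reach:

    # zpool set failmode=continue HAzpool
    # zfs create -o encryption=on \
    -o keysource=passphrase,file:///net/keyhost/keys/HAzpool.key \
    HAzpool/secure

    After you create the HAStoragePlus resource in Step 6, you can set the reboot_on_failure property that the preceding note describes, assuming the resource name hasp-rs from Example 2-40:

    # clresource set -p reboot_on_failure=TRUE hasp-rs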

  3. On any node in the cluster, assume the root role that provides solaris.cluster.modify RBAC authorization.
  4. Create a failover resource group.
    # clresourcegroup create resource-group
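
    If the pool's devices are connected to only a subset of the cluster nodes, restrict the resource group's node list when you create it. A minimal sketch, assuming the node names phys-schost-1 and phys-schost-2 from Example 2-40:

    # clresourcegroup create -n phys-schost-1,phys-schost-2 resource-group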
  5. Register the HAStoragePlus resource type.
    # clresourcetype register SUNW.HAStoragePlus
  6. Create an HAStoragePlus resource for the local ZFS file system.
    # clresource create -g resource-group -t SUNW.HAStoragePlus \
    -p Zpools=zpool -p ZpoolsSearchDir=/dev/did/dsk \
    resource

    The default location to search for devices of ZFS storage pools is /dev/dsk. You can override it by using the ZpoolsSearchDir extension property, as the preceding command does with /dev/did/dsk.

    The resource is created in the enabled state.

  7. Bring the resource group that contains the HAStoragePlus resource online and in a managed state.
    # clresourcegroup online -M resource-group
Example 2-40  Setting Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available for a Global Cluster

The following example shows the commands to make a local ZFS file system highly available.

phys-schost-1% su
Password:
# cldevice list -v

DID Device          Full Device Path
----------          ----------------
d1                  phys-schost-1:/dev/rdsk/c0t0d0
d2                  phys-schost-1:/dev/rdsk/c0t1d0
d3                  phys-schost-1:/dev/rdsk/c1t8d0
d3                  phys-schost-2:/dev/rdsk/c1t8d0
d4                  phys-schost-1:/dev/rdsk/c1t9d0
d4                  phys-schost-2:/dev/rdsk/c1t9d0
d5                  phys-schost-1:/dev/rdsk/c1t10d0
d5                  phys-schost-2:/dev/rdsk/c1t10d0
d6                  phys-schost-1:/dev/rdsk/c1t11d0
d6                  phys-schost-2:/dev/rdsk/c1t11d0
d7                  phys-schost-2:/dev/rdsk/c0t0d0
d8                  phys-schost-2:/dev/rdsk/c0t1d0
You can create a ZFS storage pool on a disk slice by specifying an Oracle Solaris device
identifier:
# zpool create HAzpool c1t8d0s2
Alternatively, you can create a ZFS storage pool on a disk slice by specifying a DID device
identifier:
# zpool create HAzpool /dev/did/dsk/d3s2
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clresourcegroup create hasp-rg
# clresourcetype register SUNW.HAStoragePlus
# clresource create -g hasp-rg -t SUNW.HAStoragePlus -p Zpools=HAzpool hasp-rs
# clresourcegroup online -M hasp-rg
Example 2-41  Setting Up the HAStoragePlus Resource Type to Make a Local ZFS File System Highly Available for a Zone Cluster

The following example shows the commands to make a local ZFS file system highly available in a zone cluster that is named sczone.

phys-schost-1# cldevice list -v
# zpool create HAzpool c1t8d0
# zfs create HAzpool/export
# zfs create HAzpool/export/home
# clzonecluster configure sczone
clzc:sczone> add dataset
clzc:sczone:dataset> set name=HAzpool
clzc:sczone:dataset> end
clzc:sczone> exit
# clresourcegroup create -Z sczone hasp-rg
# clresourcetype register -Z sczone SUNW.HAStoragePlus
# clresource create -Z sczone -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=HAzpool hasp-rs
# clresourcegroup online -Z sczone -M hasp-rg