
Reference for Oracle Solaris Cluster 4.4


Updated: August 2018
 
 

SUNW.HAStoragePlus (7)

Name

SUNW.HAStoragePlus - resource type that enforces dependencies between Oracle Solaris Cluster device services, file systems, and data services and monitors those entities

Description

SUNW.HAStoragePlus describes a resource type that enables you to specify dependencies between data service resources and device groups, cluster file systems, and local file systems.


Note -  Local file systems include UFS, StorageTek QFS, and Oracle Solaris ZFS.

This resource type enables you to bring data services online only after their dependent device groups and file systems are guaranteed to be available. The SUNW.HAStoragePlus resource type provides support for mounting, unmounting, and checking file systems.

Resource groups by themselves do not provide for direct synchronization with disk device groups, cluster file systems, or local file systems. As a result, during a cluster reboot or failover, an attempt to start a data service can occur while its dependent global devices and file systems are still unavailable. Consequently, the data service's START method might time out, and your data service might fail.

The SUNW.HAStoragePlus resource type represents the device groups and the cluster and local file systems that are to be used by one or more data service resources. You add a resource of type SUNW.HAStoragePlus to a resource group and set up dependencies between other resources and the SUNW.HAStoragePlus resource.

If an application resource is configured on top of an HAStoragePlus resource, the application resource must define an offline-restart dependency on the underlying HAStoragePlus resource. This dependency ensures that the application resource comes online only after the HAStoragePlus resource comes online, and goes offline before the HAStoragePlus resource goes offline. For example:

# clresource set \
-p Resource_dependencies_offline_restart=hasp_rs \
application_rs

These dependencies ensure that the data service resources are brought online only after the following conditions are met:

  1. All specified device services are available (and collocated, if necessary).

  2. All specified file systems are checked and mounted.

The SUNW.HAStoragePlus resource type also provides a fault monitor to monitor the health of the entities managed by the HAStoragePlus resource, including global devices, file systems, and ZFS storage pools. The fault monitor runs fault probes on a regular basis. If one of the entities becomes unavailable, the resource is restarted or a failover to another node is performed.

If more than one entity is monitored, the fault monitor probes them all at the same time. To see a list of what is monitored on global devices, raw device groups, Solaris Volume Manager device groups, file systems, and ZFS storage pools, see Chapter 2, Administering Data Service Resources in Planning and Administering Data Services for Oracle Solaris Cluster 4.4.

The HAStoragePlus resource fault monitor probes the devices and file systems it manages by reading and writing to the file systems. If a read operation is blocked by any software on the I/O stack and the HAStoragePlus resource is required to be online, the user must disable the fault monitor.
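For example, the fault monitor for a resource named hasp-rs (a hypothetical name) can be disabled, and later re-enabled, with the clresource monitoring subcommands:

# clresource unmonitor hasp-rs
# clresource monitor hasp-rs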

An HAStoragePlus resource does not monitor a ZFS file system if the file system has its mountpoint property set to none or legacy, or its canmount property set to off. For all other ZFS file systems, the HAStoragePlus resource fault monitor checks if the file system is mounted. If the file system is mounted, the HAStoragePlus resource then probes the file system's accessibility by reading and writing to it, depending on whether the value of the IOOption property is ReadOnly or ReadWrite.

If the ZFS file system is not mounted or the probe of the file system fails, the fault probe fails and the resource is set to Faulted. The RGM then attempts to restart the resource, as determined by the retry_count and retry_interval properties of the resource. Restarting the resource remounts the file system, unless the mountpoint and canmount property settings described above prevent it. If the fault probe continues to fail, exceeding retry_count within retry_interval, the RGM fails the resource group over to another node.

Standard Properties

The following standard property is associated with the SUNW.HAStoragePlus resource type:

Thorough_probe_interval

Defines the time interval (in seconds) between successive invocations of the fault probe on the resource.

Category

Optional

Minimum

5

Maximum

3600

Default

180

Tunable

Anytime
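For example, to probe the managed entities every 60 seconds rather than the default 180, set the property on a resource named hasp-rs (a hypothetical name):

# clresource set -p Thorough_probe_interval=60 hasp-rs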

Extension Properties

The following extension properties are associated with the SUNW.HAStoragePlus resource type:

AffinityOn

Specifies whether a SUNW.HAStoragePlus resource needs to perform an affinity switchover for all global devices that are defined in the GlobalDevicePaths and FileSystemMountPoints extension properties. You can specify TRUE or FALSE.

Category

Optional

Default

TRUE

Tunable

When disabled

The AffinityOn extension property is ignored for ZFS storage pools that are specified in the Zpools extension property. AffinityOn applies only to the GlobalDevicePaths and FileSystemMountPoints extension properties.

When you set the AffinityOn extension property to FALSE, the SUNW.HAStoragePlus resource passively waits for the specified global services to become available. In this case, the primary node of each online global device service might not be the same node that is the primary node for the resource group.

The purpose of an affinity switchover is to enhance performance by ensuring the co-location of the device groups and the resource groups on a specific node. Data reads and writes always occur over the device primary paths. Affinity switchovers require the potential primary node list for the resource group and the node list for the device group to be equivalent. The SUNW.HAStoragePlus resource performs an affinity switchover for each device service only once, that is, when the SUNW.HAStoragePlus resource is brought online.

The setting of the AffinityOn flag is ignored for scalable services. Affinity switchovers are not possible with scalable resource groups.
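For example, because this property is tunable only when the resource is disabled, disabling affinity switchovers for a resource named hasp-rs (a hypothetical name) takes three steps:

# clresource disable hasp-rs
# clresource set -p AffinityOn=FALSE hasp-rs
# clresource enable hasp-rs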

FileSystemCheckCommand

Overrides the check that SUNW.HAStoragePlus conducts on each unmounted file system before attempting to mount it. You can specify an alternate command string or executable, which is invoked on all unmounted file systems.

Category

Optional

Default

NULL

Tunable

Anytime

When a SUNW.HAStoragePlus resource is configured in a scalable resource group, the file-system check on each unmounted cluster file system is omitted. When you set this extension property to NULL, Oracle Solaris Cluster checks UFS by issuing the /usr/sbin/fsck -o p command. Oracle Solaris Cluster checks other file systems by issuing the /usr/sbin/fsck command.

When you set the FileSystemCheckCommand extension property to another command string, SUNW.HAStoragePlus invokes this command string with the file system mount point as an argument. You can specify any arbitrary executable in this manner. A nonzero return value is treated as an error that occurred during the file system check operation. This error causes the START method to fail.

When you do not require a file system check operation, set the FileSystemCheckCommand extension property to /bin/true.
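For example, to suppress the file system check for a resource named hasp-rs (a hypothetical name):

# clresource set -p FileSystemCheckCommand=/bin/true hasp-rs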

FileSystemMountPoints

Specifies a list of valid file system mount points. You can specify global or local file systems. Global file systems are accessible from all nodes in a cluster. Local file systems are accessible from a single cluster node. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted on a single cluster node. These local file systems require the underlying devices to be ZFS storage pools, or in the case of UFS, Oracle Solaris Cluster global devices.

These file system mount points are defined in the format paths[,…].

For non-ZFS file systems, each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.

For non-ZFS file systems, SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You can specify both local and global file system mount points at the same time.

Any non-ZFS file system whose mount point is present in the FileSystemMountPoints extension property is assumed to be local if its /etc/vfstab entry satisfies both of the following conditions:

  1. The non-global mount option is specified.

  2. The “mount at boot” field for the entry is set to “no.”
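For example, assuming a hypothetical global device d5 and the mount point /local/ha, a /etc/vfstab entry such as the following satisfies both conditions, because the global mount option is absent and the mount-at-boot field is set to no:

/dev/global/dsk/d5s0 /dev/global/rdsk/d5s0 /local/ha ufs 2 no logging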

If a ZFS file system is managed by HAStoragePlus, do not list that file system in /etc/vfstab. For an HAStoragePlus resource configured in the global cluster, do not include ZFS mount points in the FileSystemMountPoints property. ZFS pools that are to be controlled by HAStoragePlus are instead specified using the Zpools property for local file systems and the GlobalZpools property for global file systems.

In a zone cluster, you can configure an HAStoragePlus resource to control the loopback-mounting of file systems from the global cluster. In this case, the FileSystemMountPoints property is used to identify those loopback mountpoints within the zone cluster.

Category

Optional

Default

Empty list

Tunable

Anytime
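For example, the following sketch creates a failover resource group and an HAStoragePlus resource that manages a failover file system mounted at /local/ha. The names hasp-rg and hasp-rs are hypothetical:

# clresourcetype register SUNW.HAStoragePlus
# clresourcegroup create hasp-rg
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p FileSystemMountPoints=/local/ha hasp-rs
# clresourcegroup online hasp-rg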

GlobalDevicePaths

Specifies a list of valid global device group names or global device paths. The paths are defined in the format paths[,…].

Category

Optional

Default

Empty list

Tunable

When disabled
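For example, because this property is tunable only when the resource is disabled, making a resource named hasp-rs manage a device group dg1 (both names hypothetical) takes three steps:

# clresource disable hasp-rs
# clresource set -p GlobalDevicePaths=dg1 hasp-rs
# clresource enable hasp-rs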

GlobalZpools

Specifies a list of valid ZFS storage pools. These ZFS storage pools are defined in the format of a comma-separated list of pool names. The Zpools and GlobalZpools properties are not permitted to have any common ZFS storage pool names.

The GlobalZpools extension property enables you to specify ZFS storage pools for which the contained filesystem datasets will be globally mounted. The devices that make up a ZFS storage pool must be accessible from all the nodes that are configured in the node list of the resource group to which a SUNW.HAStoragePlus resource belongs. A SUNW.HAStoragePlus resource that manages a ZFS storage pool can only belong to a failover (single-mastered) resource group.

When the SUNW.HAStoragePlus resource is brought online, each ZFS storage pool listed in the GlobalZpools property is imported, and its file systems are mounted globally in the cluster.

When the resource is taken offline on a node, for each ZFS storage pool listed in the GlobalZpools property, all file systems remain globally mounted and the ZFS storage pool is not exported.


Note -  SUNW.HAStoragePlus does not support file systems created on ZFS volumes.

For more information, see the zpool(8) man page.

Category

Optional

Default

Empty list

Tunable

Anytime
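For example, to create a resource in a failover resource group hasp-rg that imports a pool named gpool and mounts its datasets globally (all names are hypothetical):

# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p GlobalZpools=gpool hasp-rs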

IOOption

Defines the type of I/O performed to probe file systems. The only supported values are ReadOnly and ReadWrite. The ReadOnly value indicates that the fault monitor is allowed to perform read-only I/O on the managed file systems, including the file systems specified in the FileSystemMountPoints property and the ZFS file systems that belong to ZFS storage pools specified in the Zpools property. The ReadWrite value indicates that the fault monitor is allowed to perform both read and write I/O on the managed file systems.

Category

Optional

Default

ReadOnly

Tunable

Anytime
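For example, to let the fault monitor perform write as well as read probes on the managed file systems, set the property on a resource named hasp-rs (a hypothetical name):

# clresource set -p IOOption=ReadWrite hasp-rs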

IOTimeout

Defines the timeout value (in seconds) for I/O probing.

Category

Optional

Minimum

10

Maximum

3600

Default

300

Tunable

Anytime

Monitor_retry_count

Controls the number of Process Monitor Facility (PMF) restarts allowed for the fault monitor.

Category

Optional

Minimum

1

Default

4

Tunable

Anytime

Monitor_retry_interval

Defines the time interval (in minutes) for fault monitor restarts.

Category

Optional

Minimum

2

Default

2

Tunable

Anytime

RebootOnFailure

Specifies whether to reboot the local system when a failure is detected by a probe. When set to TRUE, all devices that are used by the resource, directly or indirectly, must be monitored by disk-path monitoring.

If RebootOnFailure is set to TRUE and at least one device is found available for each entity specified in the GlobalDevicePaths, FileSystemMountPoints, or Zpools property, the local system is rebooted. The local system refers to the global-cluster node or the zone-cluster node where the resource is online.

Category

Optional

Default

FALSE

Tunable

Anytime

Zpools

Specifies a list of valid ZFS storage pools. These ZFS storage pools are defined in the format of a comma-separated list of pool names. The Zpools and GlobalZpools properties are not permitted to have any common ZFS storage pool names.

The devices that make up a ZFS storage pool must be accessible from all the nodes that are configured in the node list of the resource group to which a SUNW.HAStoragePlus resource belongs. A SUNW.HAStoragePlus resource that manages a ZFS storage pool can only belong to a failover (single-mastered) resource group.

When the SUNW.HAStoragePlus resource is brought online, each ZFS storage pool listed in the Zpools property is imported, and its filesystem datasets are mounted locally by ZFS.

When the resource is taken offline on a node, for each ZFS storage pool listed in the Zpools property, all file systems are unmounted and the ZFS storage pool is exported.


Note -  SUNW.HAStoragePlus does not support file systems created on ZFS volumes.

For more information, see the zpool(8) man page.

Category

Optional

Default

Empty list

Tunable

Anytime
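For example, the following sketch creates a failover resource group whose HAStoragePlus resource imports a pool named hapool and mounts its datasets locally on the node where the group is online. All names are hypothetical:

# clresourcegroup create hasp-rg
# clresource create -g hasp-rg -t SUNW.HAStoragePlus \
-p Zpools=hapool hasp-rs
# clresourcegroup online hasp-rg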

ZpoolsImportOnly

This property is used internally by Oracle Solaris Cluster and cannot be set or modified by the user.

Category

Query-only

Tunable

Never

ZpoolsExportOnStop

This property is used internally by Oracle Solaris Cluster and cannot be set or modified by the user.

Category

Query-only

Tunable

Never

ZpoolsSearchDir

Specifies the location to search for the devices of Zpools. The ZpoolsSearchDir extension property is similar to the -d option of the zpool import command.

Category

Optional

Default

/dev/dsk

Tunable

When disabled
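For example, because this property is tunable only when the resource is disabled, directing pool import to search DID devices for a resource named hasp-rs (a hypothetical name) takes three steps:

# clresource disable hasp-rs
# clresource set -p ZpoolsSearchDir=/dev/did/dsk hasp-rs
# clresource enable hasp-rs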

Examples

Example 1 Adding a ZFS Storage Pool

The following example shows how to add a ZFS storage pool, newpool, to the GlobalZpools property of a SUNW.HAStoragePlus resource named myhasp.

# clresource set -p GlobalZpools+=newpool myhasp
Example 2 Removing a ZFS Storage Pool

The following example shows how to remove a ZFS storage pool, pool2, from the GlobalZpools property of a SUNW.HAStoragePlus resource named myhasp.

# clresource set -p GlobalZpools-=pool2 myhasp

Attributes

See attributes(7) for descriptions of the following attributes:

ATTRIBUTE TYPE
ATTRIBUTE VALUE
Availability
ha-cluster/system/core

See Also

rt_reg(5), attributes(7)

Warnings

Make data service resources within a given resource group dependent on a SUNW.HAStoragePlus resource. Otherwise, no synchronization is possible between the data services and the global devices or file systems. Offline restart resource dependencies ensure that the SUNW.HAStoragePlus resource is brought online before other resources. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted only when the resource is brought online.

Enable logging on UFS file systems.

Avoid configuring multiple SUNW.HAStoragePlus resources in different resource groups that refer to the same device group and with AffinityOn flags set to TRUE. Redundant device switchovers can occur. As a result, resource and device groups might be dislocated.

Avoid configuring a ZFS storage pool under multiple SUNW.HAStoragePlus resources in different resource groups.

Fault Monitor Errors

The fault monitor monitors the entities managed by the HAStoragePlus resource, including global devices, file systems, and ZFS storage pools. The status of a monitored entity is one of the following:

  • Online - No partial errors or severe errors.

  • Degraded - Partial error.

  • Faulted - Severe error. The Resource Group Manager (RGM) attempts to restart the resource or to fail the resource group over to another cluster node.

If more than one entity is monitored, the resource's status is determined by the aggregated status of all monitored entities.


Note -  Changing the configuration of managed entities while the fault monitor is running can cause the fault monitor to exit with a failure, which leads to the resource being restarted. You should disable the fault monitor before you make configuration changes to any managed entities and then re-enable the fault monitor. Configuration changes could include removing a ZFS storage pool or a ZFS file system in a pool, or a Solaris Volume Manager disk set or volume.
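For example, a configuration change to a managed pool could be bracketed by disabling and re-enabling the fault monitor, where hasp-rs and hapool are hypothetical names:

# clresource unmonitor hasp-rs
# zfs destroy hapool/data
# clresource monitor hasp-rs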

Notes

The SUNW.HAStoragePlus resource is capable of mounting any cluster file system that is found in an unmounted state.

All file systems are mounted in the overlay mode.

Local file systems are forcibly unmounted.

The waiting time for all device services and file systems to become available is specified by the Prenet_start_timeout property in SUNW.HAStoragePlus. This is a tunable property.
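For example, to allow the resource to wait up to 30 minutes for its devices and file systems to become available, set Prenet_start_timeout on a resource named hasp-rs (a hypothetical name):

# clresource set -p Prenet_start_timeout=1800 hasp-rs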