
SUNW.HAStoragePlus - resource type that enforces dependencies between Oracle Solaris Cluster device services, file systems, and data services and monitors those entities

Description

SUNW.HAStoragePlus describes a resource type that enables you to specify dependencies between data service resources and device groups, cluster file systems, and local file systems.


Note - Local file systems include the UNIX File System (UFS), Quick File System (QFS), Veritas File System (VxFS), and Solaris ZFS (Zettabyte File System).


This resource type enables you to bring data services online only after their dependent device groups and file systems are guaranteed to be available. The SUNW.HAStoragePlus resource type provides support for mounting, unmounting, and checking file systems.

Resource groups by themselves do not provide for direct synchronization with disk device groups, cluster file systems, or local file systems. As a result, during a cluster reboot or failover, an attempt to start a data service can occur while its dependent global devices and file systems are still unavailable. Consequently, the data service's START method might time out, and your data service might fail.

The SUNW.HAStoragePlus resource type represents the device groups, cluster file systems, and local file systems that are to be used by one or more data service resources. You add a resource of type SUNW.HAStoragePlus to a resource group and set up dependencies between other resources and the SUNW.HAStoragePlus resource.
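
For example, you might register the resource type and then create an HAStoragePlus resource in an existing resource group as follows; the resource group name app-rg, the mount point /global/app-data, and the resource name hasp_rs are placeholders:

# clresourcetype register SUNW.HAStoragePlus
# clresource create -g app-rg -t SUNW.HAStoragePlus \
  -p FilesystemMountPoints=/global/app-data hasp_rs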

If an application resource is configured on top of an HAStoragePlus resource, the application resource must define the offline restart dependency on the underlying HAStoragePlus resource. This ensures the application resource comes online after the dependent HAStoragePlus resource comes online, and goes offline before the HAStoragePlus resource goes offline. For example:

# clrs set -p Resource_dependencies_offline_restart=hasp_rs application_rs

These dependencies ensure that the data service resources are brought online only after the following conditions are met:

  1. All specified device services are available (and collocated, if necessary).

  2. All specified file systems are checked and mounted.

You can also use the SUNW.HAStoragePlus resource type to access a local file system from a non-global zone.

The SUNW.HAStoragePlus resource type also provides a fault monitor to monitor the health of the entities managed by the HASP resource, including global devices, file systems, and ZFS storage pools. The fault monitor runs fault probes on a regular basis. If one of the entities becomes unavailable, the resource is restarted or a failover to another node is performed.

If more than one entity is monitored, the fault monitor probes them all at the same time. To see a list of what is monitored on global devices, raw device groups, Oracle Solaris Volume Manager device groups, VxVM device groups, file systems, and ZFS storage pools, see Chapter 2, Administering Data Service Resources, in Oracle Solaris Cluster Data Services Planning and Administration Guide.


Note - Version 9 of the HAStoragePlus resource fault monitor probes the devices and file systems it manages by reading and writing to the file systems. If a read operation is blocked by any software on the I/O stack and the HAStoragePlus resource is required to be online, the user must disable the fault monitor. For example, you must unmonitor the HAStoragePlus resource managing the AVS Remote Replication volumes because AVS blocks reading from any bitmap volume or any data volume in the NEED SYNC state. The HAStoragePlus resource managing the AVS volumes must be online at all times.
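
For example, to disable fault monitoring of an HAStoragePlus resource in such a configuration, and to re-enable it later, you might run the following commands (hasp_rs is a placeholder resource name):

# clresource unmonitor hasp_rs
# clresource monitor hasp_rs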


Standard Properties

The following standard property is associated with the SUNW.HAStoragePlus resource type:

Thorough_Probe_Interval

Defines the interval (in seconds) between invocations of the fault probe of the resource.

Category: Optional
Minimum: 5
Maximum: 3600
Default: 180
Tunable: Anytime
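
For example, you might shorten the probe interval of an existing HAStoragePlus resource as follows (hasp_rs is a placeholder resource name):

# clresource set -p Thorough_probe_interval=60 hasp_rs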

Extension Properties

The following extension properties are associated with the SUNW.HAStoragePlus resource type:

AffinityOn

Specifies whether a SUNW.HAStoragePlus resource needs to perform an affinity switchover for all global devices that are defined in the GlobalDevicePaths and FilesystemMountPoints extension properties. You can specify TRUE or FALSE. Affinity switchover is set by default, that is, AffinityOn is set to TRUE.

The Zpools extension property ignores the AffinityOn extension property. The AffinityOn extension property is intended for use with the GlobalDevicePaths and FilesystemMountPoints extension properties only.

When you set the AffinityOn extension property to FALSE, the SUNW.HAStoragePlus resource passively waits for the specified global services to become available. In this case, the primary node or zone of each online global device service might not be the same node or zone that is the primary node for the resource group.

The purpose of an affinity switchover is to enhance performance by ensuring the co-location of the device groups and the resource groups on a specific node or zone. Data reads and writes always occur over the device primary paths. Affinity switchovers require the potential primary node list for the resource group and the node list for the device group to be equivalent. The SUNW.HAStoragePlus resource performs an affinity switchover for each device service only once, that is, when the SUNW.HAStoragePlus resource is brought online.

The setting of the AffinityOn flag is ignored for scalable services. Affinity switchovers are not possible with scalable resource groups.
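
For example, to create an HAStoragePlus resource that waits passively for its device group rather than performing an affinity switchover, you might set AffinityOn to FALSE at creation time (app-rg, nfs-dg, and hasp_rs are placeholder names):

# clresource create -g app-rg -t SUNW.HAStoragePlus \
  -p GlobalDevicePaths=nfs-dg -p AffinityOn=False hasp_rs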

FilesystemCheckCommand

Overrides the check that SUNW.HAStoragePlus conducts on each unmounted file system before attempting to mount it. You can specify an alternate command string or executable, which is invoked on all unmounted file systems.

When a SUNW.HAStoragePlus resource is configured in a scalable resource group, the file-system check on each unmounted cluster file system is omitted.

The default value for the FilesystemCheckCommand extension property is NULL. When you set this extension property to NULL, Oracle Solaris Cluster checks UFS or VxFS by issuing the /usr/sbin/fsck -o p command. Oracle Solaris Cluster checks other file systems by issuing the /usr/sbin/fsck command. When you set the FilesystemCheckCommand extension property to another command string, SUNW.HAStoragePlus invokes this command string with the file system mount point as an argument. You can specify any arbitrary executable in this manner. A nonzero return value is treated as an error that occurred during the file system check operation. This error causes the START method to fail. When you do not require a file system check operation, set the FilesystemCheckCommand extension property to /bin/true.
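
For example, to skip the file system check entirely for an existing HAStoragePlus resource, you might run the following command (hasp_rs is a placeholder resource name):

# clresource set -p FilesystemCheckCommand=/bin/true hasp_rs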

FilesystemMountPoints

Specifies a list of valid file system mount points. You can specify global or local file systems. Global file systems are accessible from all nodes or zones in a cluster. Local file systems are accessible from a single cluster node or zone. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted on a single cluster node or zone. These local file systems require the underlying devices to be Oracle Solaris Cluster global devices.

These file system mount points are defined in the format paths[,...]. You can specify both the path in a non-global zone and the path in a global zone, in this format:

Non-GlobalZonePath:GlobalZonePath

The global zone path is optional. If you do not specify a global zone path, Oracle Solaris Cluster assumes that the path in the non-global zone and in the global zone are the same. If you specify the path as Non-GlobalZonePath:GlobalZonePath, you must specify GlobalZonePath in the global zone's /etc/vfstab.

The default setting for this property is an empty list.

You can use the SUNW.HAStoragePlus resource type to make a file system available to a non-global zone. To enable the SUNW.HAStoragePlus resource type to do this, you must create a mount point in the global zone and in the non-global zone. The SUNW.HAStoragePlus resource type makes the file system available to the non-global zone by mounting the file system in the global zone. The resource type then performs a loopback mount in the non-global zone.

Each file system mount point should have an equivalent entry in /etc/vfstab on all cluster nodes and in all global zones. The SUNW.HAStoragePlus resource type does not check /etc/vfstab in non-global zones.

SUNW.HAStoragePlus resources that specify local file systems can only belong in a failover resource group with affinity switchovers enabled. These local file systems can therefore be termed failover file systems. You can specify both local and global file system mount points at the same time.

Any file system whose mount point is present in the FilesystemMountPoints extension property is assumed to be local if its /etc/vfstab entry satisfies both of the following conditions:

  1. The non-global mount option is specified.

  2. The “mount at boot” field for the entry is set to “no.”

An Oracle Solaris ZFS is always a local file system. Do not list a ZFS in /etc/vfstab. Also, do not include ZFS mount points in the FilesystemMountPoints property.
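
For example, a failover UFS file system might have an /etc/vfstab entry similar to the following on each node, with the global mount option omitted and the mount-at-boot field set to no (the device names and mount point are placeholders):

/dev/global/dsk/d5s0 /dev/global/rdsk/d5s0 /local/app-data ufs 2 no logging

You might then reference that mount point when creating the resource:

# clresource create -g app-rg -t SUNW.HAStoragePlus \
  -p FilesystemMountPoints=/local/app-data hasp_rs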

GlobalDevicePaths

Specifies a list of valid global device group names or global device paths. The paths are defined in the format paths[,...]. The default setting for this property is an empty list.
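
For example, you might specify a device group name, a global device path, or both (the names are placeholders):

# clresource create -g app-rg -t SUNW.HAStoragePlus \
  -p GlobalDevicePaths=nfs-dg,/dev/global/dsk/d5s0 hasp_rs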

IOOption

Defines the type of I/O performed to probe file systems. The only supported values are ReadOnly and ReadWrite. The ReadOnly value indicates that the fault monitor is allowed to perform read-only I/O on the managed file systems, including the file systems specified in the FilesystemMountPoints property and the ZFS file systems that belong to ZFS storage pools specified in the Zpools property. The ReadWrite value indicates that the fault monitor is allowed to perform both read and write I/O on the managed file systems.

Category: Optional
Default: ReadOnly
Tunable: Anytime
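
For example, to allow the fault monitor to perform read and write probing on an existing HAStoragePlus resource, you might run the following command (hasp_rs is a placeholder resource name):

# clresource set -p IOOption=ReadWrite hasp_rs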

IOTimeout

Defines the timeout value (in seconds) for I/O probing.

Category: Optional
Minimum: 10
Maximum: 3600
Default: 300
Tunable: Anytime

Monitor_Retry_Count

Controls the number of Process Monitor Facility (PMF) restarts allowed for the fault monitor.

Category: Optional
Minimum: 1
Default: 4
Tunable: Anytime

Monitor_Retry_Interval

Defines the time interval (in minutes) for fault monitor restarts.

Category: Optional
Minimum: 2
Default: 2
Tunable: Anytime

Zpools

Specifies a list of valid ZFS storage pools, each of which contains at least one ZFS. These ZFS storage pools are defined in the format paths[,...]. The default setting for this property is an empty list. All file systems in a ZFS storage pool are mounted and unmounted together.

The Zpools extension property enables you to specify ZFS storage pools. The devices that make up a ZFS storage pool must be accessible from all the nodes or zones that are configured in the node list of the resource group to which a SUNW.HAStoragePlus resource belongs. A SUNW.HAStoragePlus resource that manages a ZFS storage pool can only belong to a failover resource group. When a SUNW.HAStoragePlus resource that manages a ZFS storage pool is brought online, the ZFS storage pool is imported, and every file system that the ZFS storage pool contains is mounted. When the resource is taken offline on a node, for each managed ZFS storage pool, all file systems are unmounted and the ZFS storage pool is exported.
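
For example, to place an existing ZFS storage pool under the control of an HAStoragePlus resource in a failover resource group, you might run the following command (app-rg, hapool, and hasp_rs are placeholder names):

# clresource create -g app-rg -t SUNW.HAStoragePlus \
  -p Zpools=hapool hasp_rs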


Note - SUNW.HAStoragePlus does not support file systems created on ZFS volumes.


ZpoolsSearchDir

Specifies the location to search for the devices of Zpools. The default value for the ZpoolsSearchDir extension property is /dev/dsk. The ZpoolsSearchDir extension property is similar to the -d option of the zpool command.
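
For example, if the pool was created on DID devices rather than on devices under /dev/dsk, you might point the search directory at the DID device directory when you create the resource (the names are placeholders):

# clresource create -g app-rg -t SUNW.HAStoragePlus \
  -p Zpools=hapool -p ZpoolsSearchDir=/dev/did/dsk hasp_rs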

Attributes

See attributes(5) for descriptions of the following attributes:

ATTRIBUTE TYPE     ATTRIBUTE VALUE
Availability       SUNWscu

See Also

rt_reg(4), attributes(5)

Warnings

Make data service resources within a given resource group dependent on a SUNW.HAStoragePlus resource. Otherwise, no synchronization is possible between the data services and the global devices or file systems. Offline restart resource dependencies ensure that the SUNW.HAStoragePlus resource is brought online before other resources. Local file systems that are managed by a SUNW.HAStoragePlus resource are mounted only when the resource is brought online.

Enable logging on UFS file systems.

Avoid configuring multiple SUNW.HAStoragePlus resources in different resource groups that refer to the same device group with their AffinityOn flags set to TRUE. Otherwise, redundant device switchovers can occur, and as a result, resource groups and device groups might become dislocated.

Avoid configuring a ZFS storage pool under multiple SUNW.HAStoragePlus resources in different resource groups.

Fault Monitor Errors

The fault monitor monitors the entities managed by the HASP resource, including global devices, file systems, and ZFS storage pools. Each monitored entity is assigned its own status. If more than one entity is monitored, the resource's status is determined by the aggregated status of all monitored entities.


Note - Changing the configuration of managed entities while the fault monitor is running can cause the fault monitor to exit with a failure, which leads to the resource being restarted. You should disable the fault monitor before you make configuration changes to any managed entities and then re-enable the fault monitor. Configuration changes could include removing a ZFS storage pool or a ZFS file system in a pool, an Oracle Solaris Volume Manager diskset or volume, or a VxVM disk group or volume.


Notes

The SUNW.HAStoragePlus resource is capable of mounting any cluster file system that is found in an unmounted state.

All file systems are mounted in the overlay mode.

Local file systems are forcibly unmounted.

The waiting time for all device services and file systems to become available is specified by the Prenet_Start_Timeout property in SUNW.HAStoragePlus. This is a tunable property.
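
For example, if the managed devices or file systems take a long time to become available, you might lengthen this timeout (hasp_rs is a placeholder resource name):

# clresource set -p Prenet_start_timeout=600 hasp_rs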