Sun Cluster 2.2 Software Installation Guide

1.3.4.2 Failure Fencing (Solstice DiskSuite)

In Sun Cluster configurations that use Solstice DiskSuite as the volume manager, Solstice DiskSuite itself determines cluster quorum and provides failure fencing. Failure fencing makes no distinction between cluster topologies: two-node clusters and clusters of more than two nodes are treated identically.

Disk fencing is accomplished in the following manner.

  1. After a node is removed from the cluster, a remaining node performs a SCSI reserve of the disk. From that point, other nodes--including the one no longer in the cluster--are prevented by the disk itself from reading or writing to it: the disk returns a Reservation_Conflict error to the read or write command. In Solstice DiskSuite configurations, the SCSI reserve is accomplished by issuing the Sun multihost ioctl MHIOCTKOWN.

  2. Nodes that remain in the cluster keep the MHIOCENFAILFAST ioctl enabled for the disks they are accessing. This ioctl is a directive to the disk driver that gives the node the capability to panic itself if it cannot access the disk because the disk is reserved by another node. The MHIOCENFAILFAST ioctl causes the driver to check the error returned by every read and write this node issues to the disk for the Reservation_Conflict error code, and also to issue a periodic background test operation to the disk to check for Reservation_Conflict. Both the foreground and the background control flow paths panic the node if Reservation_Conflict is returned.

  3. The MHIOCENFAILFAST ioctl is not specific to dual-hosted disks. If a node that has enabled MHIOCENFAILFAST for a disk loses access to that disk because another node has reserved it (by SCSI-2 exclusive reserve), the node panics.

This solution to disk fencing relies on the SCSI-2 concept of disk reservation, which requires that a disk be reserved by exactly one node at a time.

For Solstice DiskSuite configurations, the installation program scinstall(1M) does not prompt for a quorum device or a node preference, nor does it ask you to select a failure fencing policy, as it does in SSVM and CVM configurations. When Solstice DiskSuite is specified as the volume manager, you cannot configure direct-attach devices, that is, devices that attach directly to more than two nodes. Disks can be connected only to pairs of nodes.


Note -

Although the scconf(1M) command allows you to specify the +D flag to enable configuring direct-attach devices, you should not do so in Solstice DiskSuite configurations.