Sun Cluster makes all components on the "path" between users and data highly available, including network interfaces, the applications themselves, the file system, and the multihost disks. In general, a cluster component is highly available if it survives any single (software or hardware) failure in the system.
The following table shows the kinds of Sun Cluster component failures (both hardware and software) and the kinds of recovery built into the high-availability framework.
Table 3-1 Levels of Sun Cluster Failure Detection and Recovery
| Failed Cluster Resource | Software Recovery | Hardware Recovery |
|---|---|---|
| Data service | HA API, HA framework | N/A |
| Public network adapter | Network Adapter Failover (NAFO) | Multiple public network adapter cards |
| Cluster file system | Primary and secondary replicas | Multihost disks |
| Mirrored multihost disk | Volume management (Solstice DiskSuite and VERITAS Volume Manager) | Hardware RAID-5 (for example, Sun StorEdge A3x00) |
| Global device | Primary and secondary replicas | Multiple paths to the device, cluster transport junctions |
| Private network | HA transport software | Multiple private hardware-independent networks |
| Node | CMM, failfast driver | Multiple nodes |
The Sun Cluster high-availability framework detects a node failure quickly and creates a new equivalent server for the framework resources on a remaining node in the cluster. At no time are all framework resources unavailable. Framework resources unaffected by a crashed node are fully available during recovery. Furthermore, framework resources of the failed node become available as soon as they are recovered. A recovered framework resource does not have to wait for all other framework resources to complete their recovery.
Most highly available framework resources are recovered transparently to the applications (data services) using the resource. The semantics of framework resource access are fully preserved across node failure. The applications simply cannot tell that the framework resource server has been moved to another node. Failure of a single node is completely transparent to programs on remaining nodes that use the files, devices, and disk volumes attached to the failed node, as long as an alternative hardware path to the disks exists from another node. An example is the use of multihost disks that have ports to multiple nodes.
The Cluster Membership Monitor (CMM) is a distributed set of agents, one per cluster member. The agents exchange messages over the cluster interconnect to:
Enforce a consistent membership view on all nodes (quorum)
Drive synchronized reconfiguration in response to membership changes, using registered callbacks
Handle cluster partitioning (split brain, amnesia)
Ensure full connectivity among all cluster members
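The second item above, synchronized reconfiguration driven through registered callbacks, can be sketched roughly as follows. This is an illustrative pattern only, not Sun Cluster code; the class, step names, and subsystem labels are invented for the example.

```python
# Hypothetical sketch of callback-driven synchronized reconfiguration:
# subsystems register callbacks, and on a membership change the monitor
# drives every callback through each step before any callback advances
# to the next step, so all subsystems see the same intermediate state.

class MembershipMonitor:
    def __init__(self):
        self.callbacks = []

    def register(self, callback):
        self.callbacks.append(callback)

    def reconfigure(self, new_membership):
        # Each step completes for all subsystems before the next begins.
        for step in ("quiesce", "redistribute", "resume"):
            for cb in self.callbacks:
                cb(step, new_membership)

log = []
cmm = MembershipMonitor()
cmm.register(lambda step, members: log.append(("ccr", step)))
cmm.register(lambda step, members: log.append(("transport", step)))
cmm.reconfigure({"node1", "node2"})
print(log[:2])  # both subsystems finish "quiesce" before anything else runs
```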
Unlike in previous Sun Cluster releases, the CMM runs entirely in the kernel.
The main function of the CMM is to establish cluster-wide agreement on the set of nodes that participate in the cluster at any given time. Sun Cluster refers to this set as the cluster membership.
To determine cluster membership, and ultimately, ensure data integrity, the CMM:
Accounts for a change in cluster membership, such as a node joining or leaving the cluster
Ensures that a "bad" node leaves the cluster
Ensures that a "bad" node stays out of the cluster until it is repaired
Prevents the cluster from partitioning itself into subsets of nodes
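The last point, preventing partitioning, rests on majority voting: a partition may continue operating as the cluster only if it holds more than half of the configured quorum votes. The sketch below illustrates that arithmetic under an assumed four-node, one-vote-per-node configuration; it is not Sun Cluster code.

```python
# Illustrative majority-quorum check of the kind the CMM relies on to keep
# a partitioned cluster from forming two independent clusters.
# Vote counts below are hypothetical examples.

def has_quorum(votes_present: int, total_votes: int) -> bool:
    """A partition may proceed only with a strict majority of all votes."""
    return votes_present > total_votes // 2

total = 4                              # four nodes, one vote each
print(has_quorum(2, total))            # False: a 2/2 split, neither half proceeds

total_with_qd = 5                      # add one quorum-device vote
print(has_quorum(3, total_with_qd))    # True: the half that acquires the device
print(has_quorum(2, total_with_qd))    # False: the other half must halt
```

Note that in an even split without a tie-breaking quorum device, neither side has a majority, so neither can safely continue alone.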
See "Quorum and Quorum Devices" for more information on how the cluster protects itself from partitioning into multiple separate clusters.
To ensure that data is kept safe from corruption, all nodes must reach a consistent agreement on the cluster membership. When necessary, the CMM coordinates a cluster reconfiguration of cluster services (applications) in response to a failure.
The CMM receives information about connectivity to other nodes from the cluster transport layer. The CMM uses the cluster interconnect to exchange state information during a reconfiguration.
After detecting a change in cluster membership, the CMM performs a synchronized reconfiguration of the cluster, during which cluster resources might be redistributed based on the new membership of the cluster.
The Cluster Configuration Repository (CCR) is a private, cluster-wide database for storing information pertaining to the configuration and state of the cluster. The CCR is a distributed database. Each node maintains a complete copy of the database. The CCR ensures that all nodes have a consistent view of the cluster "world." To avoid corrupting data, each node needs to know the current state of the cluster resources.
The CCR is implemented in the kernel as a highly available service.
The CCR uses a two-phase commit algorithm for updates: An update must complete successfully on all cluster members or the update is rolled back. The CCR uses the cluster interconnect to apply the distributed updates.
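The two-phase commit pattern can be sketched as follows. This is a toy illustration of the general algorithm, not the CCR implementation; the class and function names are invented, and real two-phase commit must also handle coordinator failure, which this sketch omits.

```python
# Toy two-phase commit: an update is first staged ("prepared") on every
# replica; only if all replicas vote yes is it applied, otherwise it is
# rolled back everywhere.

class Replica:
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.committed = {}   # applied key/value pairs
        self.pending = None   # staged update awaiting commit

    def prepare(self, update):
        if not self.healthy:
            return False      # vote "abort"
        self.pending = update
        return True           # vote "commit"

    def commit(self):
        self.committed.update(self.pending)
        self.pending = None

    def rollback(self):
        self.pending = None

def two_phase_update(replicas, update):
    # Phase 1: stage the update on every replica.
    if all(r.prepare(update) for r in replicas):
        # Phase 2: all voted yes, so apply everywhere.
        for r in replicas:
            r.commit()
        return True
    # Any "no" vote rolls the update back on every replica.
    for r in replicas:
        r.rollback()
    return False

nodes = [Replica("phys-node1"), Replica("phys-node2")]
print(two_phase_update(nodes, {"cluster.name": "sc-cluster"}))  # True
```

The all-or-nothing property is what keeps every node's copy of the repository identical: a failed replica causes the update to be discarded everywhere rather than applied partially.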
Although the CCR is made up of text files, never edit the CCR files manually. Each file contains a checksum record to ensure consistency. Manually updating CCR files can cause a node or the entire cluster to stop functioning.
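The checksum record is why a hand-edited file is rejected: the stored checksum no longer matches the file body. The sketch below shows the general idea using CRC-32; the record format and algorithm here are hypothetical, not the actual CCR file layout.

```python
# Illustration of checksum-protected text files: a trailing checksum
# record lets a reader detect any manual edit to the body.
import zlib

def with_checksum(lines):
    body = "\n".join(lines)
    return body + "\n#checksum " + format(zlib.crc32(body.encode()), "08x")

def verify(text):
    body, _, record = text.rpartition("\n#checksum ")
    return format(zlib.crc32(body.encode()), "08x") == record

ccr_file = with_checksum(["cluster.name sc-cluster", "cluster.state enabled"])
print(verify(ccr_file))                                  # True
print(verify(ccr_file.replace("enabled", "disabled")))   # False: edit detected
```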
The CCR relies on the CMM to guarantee that a cluster is running only when quorum is established. The CCR is responsible for verifying data consistency across the cluster, performing recovery as necessary, and facilitating updates to the data.