To function as a cluster member, a Solaris host must have the following software installed:
Sun Cluster software
Data service application
Volume management (Solaris Volume Manager or Veritas Volume Manager)
An exception is a configuration that uses hardware volume management on the storage array. Such a configuration might not require a software volume manager.
Figure 3–2 shows a high-level view of the software components that work together to create the Sun Cluster software environment.
To ensure that data is kept safe from corruption, all nodes must reach a consistent agreement on the cluster membership. When necessary, the Cluster Membership Monitor (CMM) coordinates a cluster reconfiguration of cluster services in response to a failure.
The CMM receives information about connectivity to other nodes from the cluster transport layer. The CMM uses the cluster interconnect to exchange state information during a reconfiguration.
After detecting a change in cluster membership, the CMM performs a synchronized configuration of the cluster. In this configuration, cluster resources might be redistributed, based on the new membership of the cluster.
The CMM runs entirely in the kernel.
The Cluster Configuration Repository (CCR) relies on the CMM to guarantee that a cluster is running only when quorum is established. The CCR is responsible for verifying data consistency across the cluster, performing recovery as necessary, and facilitating updates to the data.
A cluster file system is a proxy between the following:
The kernel on one Solaris host and the underlying file system
The volume manager that is running on a Solaris host that has a physical connection to the disk or disks
Cluster file systems depend on shared devices (disks, tapes, CD-ROMs). The shared devices can be accessed from any Solaris host in the cluster through the same file name (for example, /dev/global/). That host does not need a physical connection to the storage device. You can use a shared device in the same way as a regular device; that is, you can create a file system on a shared device by using newfs or mkfs.
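As a sketch of this workflow (the device path and mount point below are hypothetical examples, and the commands must be run on a cluster node), creating a file system on a shared device and mounting it globally might look like the following:

```
# Create a UFS file system on a shared device (example device name).
newfs /dev/global/rdsk/d0s0

# Create the mount point on every cluster node, then mount the file
# system with the global option so that all nodes can access it
# through the same path name.
mkdir -p /global/app-data
mount -o global,logging /dev/global/dsk/d0s0 /global/app-data
```

After the global mount, processes on any node can open files under /global/app-data, whether or not that node has a physical connection to the disk.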
The cluster file system has the following features:
File access locations are transparent. A process can open a file that is located anywhere in the system. Also, processes on all hosts can use the same path name to locate a file.
When the cluster file system reads files, it does not update the access time on those files.
Coherency protocols are used to preserve the UNIX file access semantics even if the file is accessed concurrently from multiple hosts.
Extensive caching is used with zero-copy bulk I/O movement to move file data efficiently.
The cluster file system provides highly available advisory file-locking functionality by using the fcntl(2) interfaces. Applications that run on multiple cluster hosts can synchronize access to data by using advisory file locking on a cluster file system file. File locks are recovered immediately from nodes that leave the cluster, and from applications that fail while holding locks.
Continuous access to data is ensured, even when failures occur. Applications are not affected by failures if a path to disks is still operational. This guarantee is maintained for raw disk access and all file system operations.
Cluster file systems are independent from the underlying file system and volume management software. Cluster file systems make any supported on-disk file system global.
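Because any supported on-disk file system can be made global, a cluster file system is typically made available at boot time through an /etc/vfstab entry that includes the global mount option. The entry below is a hedged illustration; the device and mount point are hypothetical:

```
#device to mount      device to fsck         mount point       FS   fsck  mount    mount
#                                                              type pass  at boot  options
/dev/global/dsk/d0s0  /dev/global/rdsk/d0s0  /global/app-data  ufs  2     yes      global,logging
```

Here the underlying file system is UFS, but the same global option applies to other supported on-disk file systems.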