A cluster node is a machine that runs both the Solaris Operating System and Sun Cluster software. A cluster node is also either a current member of the cluster (a cluster member) or a potential member.
SPARC: Sun Cluster software supports one to sixteen nodes in a cluster. See SPARC: Sun Cluster Topologies for SPARC for the supported node configurations.
x86: Sun Cluster software supports two nodes in a cluster. See x86: Sun Cluster Topologies for x86 for the supported node configurations.
Cluster nodes are generally attached to one or more multihost devices. Nodes that are not attached to multihost devices use the cluster file system to access data on the multihost devices. For example, one scalable services configuration enables nodes to service requests without being directly attached to multihost devices.
In addition, nodes in parallel database configurations share concurrent access to all the disks.
See Multihost Devices for information about concurrent access to disks.
See SPARC: Clustered Pair Topology for SPARC and x86: Clustered Pair Topology for x86 for more information about parallel database configurations.
All nodes in the cluster are grouped under a common name—the cluster name—which is used for accessing and managing the cluster.
Public network adapters attach nodes to the public networks, providing client access to the cluster.
Cluster members communicate with the other nodes in the cluster through one or more physically independent networks. This set of physically independent networks is referred to as the cluster interconnect.
Every node in the cluster is aware when another node joins or leaves the cluster. Additionally, every node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other cluster nodes.
Nodes in the same cluster should have similar processing, memory, and I/O capability to enable failover to occur without significant degradation in performance. Because of the possibility of failover, every node must have enough excess capacity to support the workload of all nodes for which it is a backup or secondary.
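The capacity rule above can be sketched as a simple check. This is an illustrative model only, not part of Sun Cluster software; the function name and the numbers are hypothetical.

```python
def can_absorb_failover(node_capacity, own_load, backed_up_loads):
    """Return True if a node's spare capacity covers the combined
    workload of every node for which it is a backup or secondary.

    Illustrative model only; units are arbitrary workload points.
    """
    spare = node_capacity - own_load
    return spare >= sum(backed_up_loads)

# Two-node clustered pair: each node is the backup for the other,
# so each must be able to carry both workloads at once.
print(can_absorb_failover(node_capacity=100, own_load=40, backed_up_loads=[45]))  # True
print(can_absorb_failover(node_capacity=100, own_load=70, backed_up_loads=[45]))  # False
```

In the second call the node's spare capacity (30) cannot cover the backed-up workload (45), so a failover to that node would degrade performance.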
Each node boots its own individual root (/) file system.