All nodes must be connected by the cluster interconnect through at least two redundant, physically independent networks, or paths, to avoid a single point of failure. While two interconnects are required for redundancy, up to six can be used to spread traffic, avoiding bottlenecks and improving both redundancy and scalability. The Sun Cluster interconnect uses Fast Ethernet, Gigabit Ethernet, Sun Fire Link, or the Scalable Coherent Interface (SCI, IEEE 1596-1992) to enable high-performance cluster-private communications.
In clustered environments, high-speed, low-latency interconnects and protocols for internode communications are essential. The SCI interconnect in Sun Cluster systems offers improved performance over typical network interface cards (NICs). Sun Cluster uses the Remote Shared Memory (RSM™) interface for internode communication across a Sun Fire Link network. RSM is a Sun messaging interface that is highly efficient for remote memory operations.
The cluster interconnect consists of the following hardware components:
Adapters – The network interface cards that reside in each cluster node. A network adapter with multiple interfaces could become a single point of failure if the entire adapter fails.
Junctions – The switches that reside outside of the cluster nodes. Junctions perform pass-through and switching functions to enable you to connect more than two nodes. In a two-node cluster, you do not need junctions because the nodes can be directly connected to each other through redundant physical cables. Those redundant cables are connected to redundant adapters on each node. Configurations with more than two nodes require junctions.
Cables – The physical connections that are placed between either two network adapters or an adapter and a junction.
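On a running Sun Cluster 3.x system, these three component types are typically registered and inspected with the scconf and scstat utilities. The following is an illustrative sketch only, not a complete installation procedure; the node, adapter, and switch names (phys-schost-1, phys-schost-2, qfe1, etherswitch1) are hypothetical placeholders, and the exact options accepted depend on your Sun Cluster release.

```shell
# Register a transport adapter on each node
# (qfe1 is a hypothetical Quad FastEthernet interface).
scconf -a -A trtype=dlpi,name=qfe1,node=phys-schost-1
scconf -a -A trtype=dlpi,name=qfe1,node=phys-schost-2

# Register a transport junction (switch). Junctions are required
# only for clusters of more than two nodes; two-node clusters can
# cable the adapters directly to each other.
scconf -a -B type=switch,name=etherswitch1

# Cable each adapter endpoint to the junction.
scconf -a -m endpoint=phys-schost-1:qfe1,endpoint=etherswitch1
scconf -a -m endpoint=phys-schost-2:qfe1,endpoint=etherswitch1

# Verify that the redundant transport paths are online.
scstat -W
```

Because at least two independent paths are required, the adapter, junction, and cable registrations above would be repeated for a second interconnect so that the loss of any single adapter, switch, or cable does not partition the cluster.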
Figure 3–4 shows how the three components are connected.