A Sun HPC 3.0 cluster can have up to four nodes connected to an SCI-based private subnet. The nodes may connect to the SCI network through one or two SCI adapter cards. When each node in the network has two SCI adapter cards, communication bandwidth can be increased by striping messages across both network interfaces.
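Striping works by dividing each outbound message into fixed-size stripes and sending the stripes round-robin across the available interfaces, so both adapters transfer in parallel. The following sketch, in Python for illustration only, shows this round-robin pattern; the stripe size, the stripe_message function, the send method, and the interface names are hypothetical placeholders and do not represent the actual ClusterTools implementation.

    STRIPE_SIZE = 8192  # hypothetical stripe width in bytes

    class Interface:
        """Stand-in for one SCI network interface (illustration only;
        the device names below are hypothetical)."""
        def __init__(self, name):
            self.name = name

        def send(self, stripe):
            print(f"{self.name}: sending {len(stripe)} bytes")

    def stripe_message(message, interfaces):
        """Split a message into fixed-size stripes and assign them
        round-robin across the available interfaces, so each adapter
        carries roughly half the traffic in a two-card node."""
        stripes = [message[i:i + STRIPE_SIZE]
                   for i in range(0, len(message), STRIPE_SIZE)]
        for index, stripe in enumerate(stripes):
            interfaces[index % len(interfaces)].send(stripe)

    # Example: a 20 KB message striped across two SCI interfaces.
    stripe_message(bytes(20 * 1024), [Interface("sci0"), Interface("sci1")])

With two interfaces of equal bandwidth, this pattern roughly doubles the peak point-to-point transfer rate, at the cost of some splitting and reassembly overhead.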
See the Sun HPC ClusterTools 3.0 Administrator's Guide: With LSF or the Sun HPC ClusterTools 3.0 Administrator's Guide: With CRE for additional information about configuring a Sun HPC cluster to support message striping.
Chapter 2, Network Connection Procedure, explains how to connect the nodes in the network topologies described below. Chapter 3, Configuring the SCI Network Interface, explains how to configure the SCI drivers.
Figure 1-2 shows how two nodes in a Sun HPC 3.0 cluster can be connected via an SCI network. The SCI adapter card in one node is connected directly to an SCI adapter card in the other node, with no intervening SCI switch; this direct connection is the usual scheme for two-node networks.
If each node has two SCI adapter cards, messages can be striped across the two network interfaces. This is illustrated in the lower schematic in "Connecting SCI Adapter Cards in a Two-Node Network".
If you expect to add a node to a two-node network at a later time, you may want to connect the two nodes through a switch now, even though the switch is not yet needed. Doing so simplifies the process of adding a third node later. The chief disadvantage of using a switch in a two-node network is the latency it adds to the communication path between the nodes.
This alternate connection scheme is discussed further in "Connecting SCI Adapter Cards in a Two-Node Network".
Figure 1-3 shows examples of how three Sun HPC nodes can be connected to an SCI network, in both unstriped and striped modes.
Figure 1-4 shows examples of how four Sun HPC nodes can be connected to an SCI network, in both unstriped and striped modes.