This section describes requirements and restrictions that apply to the operation of the cluster interconnect when you use certain special features.
An interconnect path is one network step in the cluster private network: from a node to another node, from a node to a switch, or from a switch to a node. Every path in your cluster interconnect must use the same networking technology, whether Ethernet, PCI-SCI, or Sun Fire Link.
All interconnect paths must also operate at the same speed. For example, if you use Ethernet components that can operate at different speeds, and your cluster configuration does not allow those components to automatically negotiate a common network speed, you must manually configure all of them to operate at the same speed.
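As a sketch of how such manual configuration might look, the following commands force a Sun hme Ethernet interface to 100 Mbit/s full duplex by disabling autonegotiation advertisement. The device name (hme) and the choice of 100 Mbit/s are assumptions for illustration; consult your NIC driver documentation for the parameters that apply to your hardware.

```shell
# Hedged sketch: force an hme interface to 100 Mbit/s full duplex
# when components cannot autonegotiate a common speed.
# Device name and speed are examples only.

# Disable autonegotiation advertisement
ndd -set /dev/hme adv_autoneg_cap 0

# Advertise only 100 Mbit/s full duplex; turn off all other modes
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_10fdx_cap 1
ndd -set /dev/hme adv_10fdx_cap 0
ndd -set /dev/hme adv_10hdx_cap 0

# Verify the resulting link speed
ndd -get /dev/hme link_speed
```

Repeat the equivalent settings on every interface and switch port in the interconnect so that all paths run at the identical speed.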
When configuring Ethernet switches for your cluster private interconnect, disable the spanning tree algorithm on ports used for the interconnect.
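The exact commands for disabling the spanning tree algorithm are vendor specific. The fragment below shows one possible form using Cisco IOS syntax; the interface name is an assumption, and your switch may instead offer a per-port disable command. Consult your switch documentation for the supported method.

```
! Vendor-specific sketch (Cisco IOS syntax shown only as an example;
! the interface name is an assumption).
interface FastEthernet0/1
 ! Move the interconnect port directly to the forwarding state,
 ! bypassing the spanning tree listening and learning states
 spanning-tree portfast
```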
If you are using Ethernet to implement the private interconnect and your cluster is configured with scalable services, then you must configure jumbo frames for both the public and the private networks. The private interconnect must be configured with a maximum transmission unit (MTU) that is equal to or greater than the MTU of the public network.
If you are using SCI or Sun Fire Link to implement the private interconnect, you can use jumbo frames on the public network with no restrictions. For information about how to configure jumbo frames, see the Sun GigaSwift documentation.
When you use jumbo frames, you must enable them on all of your cluster interconnect paths.
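As an illustration of enabling jumbo frames on a Sun GigaSwift (ce) interface, the commands below use the driver's ndd parameters. The instance number, interface name, and MTU value are assumptions for this sketch; the Sun GigaSwift documentation is the authoritative reference.

```shell
# Hedged sketch for a Sun GigaSwift (ce) interface; the instance
# number and MTU of 9000 are examples only.

# Select ce instance 0, then enable jumbo frame support
ndd -set /dev/ce instance 0
ndd -set /dev/ce accept-jumbo 1

# Set the interface MTU; the private interconnect MTU must be
# equal to or greater than the public network MTU
ifconfig ce0 mtu 9000
```

Apply the equivalent configuration to every interface on every interconnect path, and to the public-network interfaces, so that the MTU requirement above is satisfied cluster-wide.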
Certain patches are required to use jumbo frames with Sun Cluster. Use the PatchPro tool (http://www.sun.com/PatchPro/) to get these patches.
Do not place an SCI card in the 33 MHz PCI slot (slot 1) of the hot swap PCI+ (hsPCI+) I/O assembly. This placement can cause a system panic.
The following requirements and restrictions apply to Sun Cluster configurations that use InfiniBand adapters:
A two-node cluster must use InfiniBand switches. You cannot directly connect the InfiniBand adapters to each other.
Sun InfiniBand switches support up to nine nodes in a cluster.
Jumbo frames are not supported on a cluster that uses InfiniBand adapters.
If only one InfiniBand adapter is installed on a cluster node, each of its two ports must be connected to a different InfiniBand switch.
If two InfiniBand adapters are installed in a cluster node, leave the second port on each adapter unused. For example, connect port 1 on HCA 1 to switch 1 and connect port 1 on HCA 2 to switch 2.
VLANs are not supported on a cluster that uses InfiniBand switches.