Oracle® Solaris Cluster Hardware Administration Manual

Updated: February 2017

Interconnect Requirements and Restrictions

This section describes requirements and restrictions on cluster interconnect operation when you use certain special features.

Cluster Interconnect and Routing

Heartbeat packets that are sent over the cluster interconnect are not IP based. As a result, these packets cannot be routed. If you install a router between two cluster nodes that are connected through cluster interconnects, heartbeat packets cannot find their destination. Your cluster consequently fails to work correctly.

To ensure that your cluster works correctly, you must set up the cluster interconnect in the same layer 2 (data link) network and in the same broadcast domain. The cluster interconnect must be located in the same layer 2 network and broadcast domain even if the cluster nodes are located in different, remote data centers. Cluster nodes that are arranged remotely are described in more detail in Campus Clustering With Oracle Solaris Cluster Software.
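For example, after the interconnect is cabled and configured, you can confirm from any cluster node that each transport path is recognized and online. The node name in the prompt is illustrative.

  phys-schost# clinterconnect show      # display the configured transport adapters, switches, and cables
  phys-schost# clinterconnect status    # verify that each transport path reports an online status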

Cluster Interconnect Speed Requirements

An interconnect path is one network step in the cluster private network: from one node to another node, from a node to a switch, or from a switch to a node. Each path in your cluster interconnect must use the same networking technology.

All interconnect paths must also operate at the same speed. For example, if you use Ethernet components that are capable of operating at different speeds, and your cluster configuration does not allow these components to automatically negotiate a common network speed, you must configure them to operate at the same speed.
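For example, on Oracle Solaris you can check the speed at which each interconnect adapter operates by using the dladm command and, on drivers that do not negotiate a common speed, force a fixed speed by setting the driver's Ethernet capability link properties. The link name net1 and the property shown are illustrative; the available properties depend on the NIC driver.

  phys-schost# dladm show-phys                              # the SPEED and DUPLEX columns show the current operating mode of each link
  phys-schost# dladm show-linkprop net1                     # list the speed and autonegotiation properties that the driver supports
  phys-schost# dladm set-linkprop -p en_1000fdx_cap=1 net1  # example only: enable 1 Gbit/sec full duplex on drivers that provide this property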

Ethernet Switch Configuration in the Cluster Interconnect

When configuring Ethernet switches for your cluster private interconnect, disable the spanning tree algorithm on ports that are used for the interconnect.

Requirements When Using Jumbo Frames

If you use scalable data services and jumbo frames on your public network, ensure that the maximum transmission unit (MTU) of the private network is the same size as, or larger than, the MTU of your public network.


Note - Scalable services cannot forward public network packets that are larger than the MTU size of the private network. The scalable services application instances will not receive those packets.

Consider the following information when configuring jumbo frames:

  • The maximum MTU size for an InfiniBand interface is typically less than the maximum MTU size for an Ethernet interface.

  • If you use switches in your private network, ensure that they are configured to the same MTU sizes as the private network interfaces.


    Caution - If the switches are not configured to the MTU sizes of the private network interfaces, the cluster interconnect might not stay online.


For information about how to configure jumbo frames, see the documentation that shipped with your network interface card. See your Oracle Solaris OS documentation or contact your Oracle sales representative for other Oracle Solaris restrictions.
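For example, on Oracle Solaris you can view and set the MTU of a private-network link by using the dladm command. The link name net1 and the MTU value 9000 are illustrative; use values that match your public-network jumbo-frame configuration and your switch settings. Depending on the driver, the link might need to be taken down before a new MTU takes effect.

  phys-schost# dladm show-linkprop -p mtu net1      # display the current MTU of a private-network link
  phys-schost# dladm set-linkprop -p mtu=9000 net1  # example only: set a jumbo-frame MTU on the link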

Requirements and Restrictions When Using Sun InfiniBand from Oracle in the Cluster Interconnect

The following requirements and guidelines apply to Oracle Solaris Cluster configurations that use Sun InfiniBand adapters from Oracle (an example of verifying the adapter state follows the list):

  • A two-node cluster must use InfiniBand switches. You cannot directly connect the InfiniBand adapters to each other.

  • If only one InfiniBand adapter is installed on a cluster node, each of its two ports must be connected to a different InfiniBand switch.

  • If two InfiniBand adapters are installed in a cluster node, leave the second port on each adapter unused for interconnect purposes. For example, connect port 1 on HCA 1 to switch 1 and connect port 1 on HCA 2 to switch 2 when using these connections as a cluster interconnect.
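After the adapters are cabled as described in this list, you can verify the state of the InfiniBand links from each node. This is a minimal check; the output columns vary by Oracle Solaris release.

  phys-schost# dladm show-phys    # InfiniBand links report their current STATE and SPEED
  phys-schost# dladm show-ib      # show each HCA port, its state, and the partition keys to which it belongs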

Requirements for Sockets Direct Protocol Over an Oracle Solaris Cluster Interconnect

In an Oracle Solaris Cluster configuration that uses an InfiniBand interconnect, applications can use Sockets Direct Protocol (SDP) by configuring SDP to use the clprivnetN network device. If a failure occurs at an HCA port or a switch port, Automatic Path Migration (APM) fails over all live SDP sessions to the standby HCA port in a manner that is transparent to the application. APM is a built-in failover facility that is included in the InfiniBand software.

APM cannot be performed if the standby port is connected to a different switch partition; in that case, the application must explicitly reestablish its SDP sessions to recover. To ensure that APM can be performed successfully, observe the following requirements (an example of verifying the SDP configuration follows the list):

  • If redundant InfiniBand switches are set up as a cluster interconnect, you must use multiple HCAs. Both ports of an HCA must be connected to the same switch, and only one of the two HCA ports can be configured as a cluster interconnect.

  • If only one InfiniBand switch is set up as a cluster interconnect, you can use only one HCA. Both ports of the HCA must be connected to the same InfiniBand partition on the switch, and both ports can be configured as a cluster interconnect.
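The following is a minimal way to confirm the SDP prerequisites on each node, assuming that the Oracle Solaris sdpadm utility is available and that the cluster private-network device is named clprivnet0 in your configuration.

  phys-schost# sdpadm status                       # verify whether SDP is enabled on the node
  phys-schost# sdpadm enable                       # enable SDP if it is not already enabled
  phys-schost# ipadm show-addr | grep clprivnet    # confirm that the clprivnet device is plumbed with a private-network address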