This chapter describes the procedures for installing cluster interconnect hardware. Where appropriate, this chapter includes separate procedures for the interconnects that Sun Cluster software supports:
Ethernet
InfiniBand
SPARC: PCI-SCI
SPARC: Sun Fire Link
This chapter contains the following information:
Installing Ethernet or InfiniBand Cluster Interconnect Hardware
SPARC: Installing PCI-SCI Cluster Interconnect Hardware
SPARC: Installing Sun Fire Link Cluster Interconnect Hardware
Use the following information to learn more about cluster interconnects:
For conceptual information about cluster interconnects, see Cluster Interconnect in Sun Cluster Concepts Guide for Solaris OS.
For information about how to administer cluster interconnects, see Chapter 6, Administering Cluster Interconnects and Public Networks, in Sun Cluster System Administration Guide for Solaris OS.
This section describes requirements and restrictions that apply to cluster interconnect operation when you use certain special features.
An interconnect path is one network step in the cluster private network: from a node to another node, from a node to a switch, or from a switch to a node. Each path in your cluster interconnect must use the same networking technology, whether Ethernet, InfiniBand, PCI-SCI, or Sun Fire Link.
All interconnect paths must also operate at the same speed. This means, for example, that if you are using Ethernet components that are capable of operating at different speeds, and if your cluster configuration does not allow these components to automatically negotiate a common network speed, you must configure them to operate at the same speed.
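For example, on Solaris systems you can often force a fixed speed and duplex with the ndd command when autonegotiation is not possible. The following is a minimal sketch that assumes hme network interfaces; parameter names vary by driver, so consult your network interface documentation:

```
# Force 100 Mbit/s full duplex on the hme driver instead of autonegotiation
# (hme is an assumption; other drivers use different parameter names)
ndd -set /dev/hme adv_autoneg_cap 0
ndd -set /dev/hme adv_100fdx_cap 1
```

Settings that are made with ndd do not persist across reboots. To make them permanent, place the equivalent parameters in the driver configuration file or in an initialization script.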
When configuring Ethernet switches for your cluster private interconnect, disable the spanning tree algorithm on ports used for the interconnect.
If you are using Ethernet to implement the private interconnect and your cluster is configured with scalable services, you must configure jumbo frames for both the public and the private networks. The private interconnect must be configured with a maximum transmission unit (MTU) that is the same as or greater than the MTU of the public network.
If you are using SCI or Sun Fire Link to implement the private interconnect, you can use jumbo frames on the public network with no restrictions. For information about how to configure jumbo frames, see the Sun GigaSwift documentation.
When you use jumbo frames, all of your cluster interconnect paths must be configured with jumbo frames enabled.
Certain patches are required to use jumbo frames with Sun Cluster. Use the PatchPro tool (http://www.sun.com/PatchPro/) to get these patches.
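As a quick consistency check, you can compare the MTU of the public-network and private-interconnect interfaces with ifconfig. A minimal sketch, assuming interfaces named ce0 (public) and ce1 (private); your interface names will differ:

```
# The mtu value reported for the private interconnect interface must be
# the same as or greater than the mtu of the public network interface
ifconfig ce0    # public network (interface name is an assumption)
ifconfig ce1    # private interconnect (interface name is an assumption)
```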
Do not place an SCI card in the 33 MHz PCI slot (slot 1) of the hot swap PCI+ (hsPCI+) I/O assembly. This placement can cause a system panic.
The following requirements and restrictions apply to Sun Cluster configurations that use InfiniBand adapters:
A two-node cluster must use InfiniBand switches. You cannot directly connect the InfiniBand adapters to each other.
Sun InfiniBand switches support up to nine nodes in a cluster.
Jumbo frames are not supported on a cluster that uses InfiniBand adapters.
If only one InfiniBand adapter is installed on a cluster node, each of its two ports must be connected to a different InfiniBand switch.
If two InfiniBand adapters are installed in a cluster node, leave the second port on each adapter unused. For example, connect port 1 on HCA 1 to switch 1 and connect port 1 on HCA 2 to switch 2.
VLANs are not supported on a cluster that uses InfiniBand switches.
Table 3–1 lists the procedures for installing Ethernet cluster interconnect hardware. Perform these procedures in the order in which they are listed. This section contains a procedure for installing cluster hardware during an initial installation of a cluster, before you install Sun Cluster software.
Table 3–1 Task Map: Installing Ethernet Cluster Interconnect Hardware
Task | For Instructions
---|---
Install the transport adapters. | The documentation that shipped with your nodes and host adapters
Install the transport cables. | How to Install Ethernet or InfiniBand Transport Cables and Transport Junctions
If your cluster contains more than two nodes, install a transport junction (switch). | How to Install Ethernet or InfiniBand Transport Cables and Transport Junctions
Use this procedure to install Ethernet or InfiniBand transport cables and transport junctions (switches).
If not already installed, install transport adapters in your cluster nodes.
For the procedure about how to install transport adapters, see the documentation that shipped with your host adapters and node hardware.
Install the transport cables. If necessary, install transport junctions.
(Ethernet Only) As Figure 3–1 shows, a cluster with only two nodes can use a point-to-point connection, requiring no transport junctions.
(Ethernet Only) For a point-to-point connection, you can use either UTP or fibre. With fibre, use a standard patch cable. A crossover cable is unnecessary. With UTP, see your network interface card documentation to determine whether you need a crossover cable.
(Ethernet Only) You can optionally use transport junctions in a two-node cluster. If you use transport junctions in a two-node cluster, you can more easily add nodes later. Whenever you use transport junctions, you must use two of them.
As Figure 3–2 shows, a cluster with more than two nodes, as well as any cluster that uses InfiniBand for the interconnect, requires transport junctions. These transport junctions are Ethernet or InfiniBand switches (customer-supplied).
To install the Sun Cluster software and configure the Sun Cluster software with the new interconnect, see Chapter 2, Installing and Configuring Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS.
To configure jumbo frames on the interconnect, review the requirements in SPARC: Requirements When Using Jumbo Frames and see the Sun GigaSwift documentation for instructions.
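After the software is installed and the interconnect is configured, you can confirm that every interconnect path is operational. A minimal sketch, assuming the Sun Cluster scstat utility is in your PATH:

```
# Show the status of the cluster transport paths;
# each path should be reported as online
scstat -W
```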
Table 3–2 lists the procedures for installing PCI-SCI cluster interconnect hardware. Perform these procedures in the order in which they are listed. This section contains a procedure for installing cluster hardware during an initial installation of a cluster, before you install Sun Cluster software.
Table 3–2 SPARC: Task Map: Installing PCI-SCI Cluster Interconnect Hardware
Task | For Instructions
---|---
Install the transport adapters. | The documentation that shipped with your nodes and host adapters
Install the PCI-SCI transport cables. | SPARC: How to Install PCI-SCI Transport Cables and Transport Junction
If you have a three-node or four-node cluster, install a PCI-SCI transport junction (switch). | SPARC: How to Install PCI-SCI Transport Cables and Transport Junction
Use this procedure to install PCI-SCI transport cables and transport junctions (switches).
When you perform this procedure, the following error messages might be displayed on your console, depending on which version of the Solaris OS you are running.
If you are using Solaris 8:
```
Nov 13 20:11:43 e04a ip: ip_rput_dlpi(scid0): DL_ERROR_ACK for DL_ENABMULTI_REQ(29), errno 7, unix 0
Nov 13 20:11:43 e04a ip: ip: joining multicasts failed (7) on scid0 - will use link layer broadcasts for multicast
```
These error messages are displayed when the ip module probes the driver, because the driver does not support the multicast feature. Sun Cluster software does not use the multicast feature on the private interconnect, so you can safely ignore these error messages.
If you are using Solaris 9:
```
Dec 4 17:40:14 e03a in.routed[132]: write(rt_sock) RTM_ADD 172.17.0.128/25 -->172.17.0.129 metric=0 flags=0: File exists
Dec 4 17:40:19 e03a in.routed[132]: interface scid0 to 172.17.0.129 broken: in=0 ierr=0 out=0 oerr=4
```
These error messages come from the in.routed routing protocol, which Solaris 9 uses as its default routing protocol, and they reflect the way that Solaris 9 handles SCI dlpi interfaces. You can safely ignore these error messages.
If you are using Solaris 10, no error messages appear in this situation.
If not already installed, install PCI-SCI transport adapters in your cluster nodes.
For the procedure about how to install PCI-SCI transport adapters and set their DIP switches, see the documentation that shipped with your PCI-SCI host adapters and node hardware.
SBus-SCI host adapters are not supported by Sun Cluster software. If you are upgrading from a Sun Cluster 2.2 cluster, remove any SBus-SCI host adapters from the cluster nodes. If you do not remove these adapters, you might see panic error messages during the SCI self-test.
Install the PCI-SCI transport cables and, optionally, transport junctions, depending on how many nodes are in your cluster.
Configuration With Point-to-Point Connections:
A two-node cluster can use a point-to-point connection, requiring no transport junction.
Connect the ends of the cables that are marked SCI Out to the I connectors on the adapters.
Connect the ends of the cables that are marked SCI In to the O connectors on the adapters.
See the following diagrams for cabling details.
Configuration With Transport Junction:
A three-node or four-node cluster requires SCI transport junctions.
Set the Unit selectors on the fronts of the SCI transport junctions to F. Do not use the X-Ports on the SCI transport junctions.
Connect the ends of the cables that are marked SCI Out to the I connectors on the adapters and to the Out connectors on the transport junctions.
Connect the ends of the cables that are marked SCI In to the O connectors on the adapters and to the In connectors on the transport junctions.
See the following diagrams for cabling details. For the procedure about how to install and cable, see the SCI switch documentation that shipped with your hardware switches.
If you have problems with your PCI-SCI interconnect, perform the following tasks:
Verify that the LED on the PCI-SCI transport adapter is blinking green rapidly. For detailed LED interpretations and actions, see the documentation that shipped with your host adapter.
Verify that the PCI-SCI transport adapter card's DIP switch settings are correct. For more information, see the documentation that shipped with your PCI-SCI host adapter.
Verify that the PCI-SCI cables are correctly connected. The cable ends that are marked SCI Out must connect to the I connectors on the PCI-SCI adapter cards and, if you are using transport junctions, to the Out ports on the SCI transport junctions.
Verify that the cable ends that are marked SCI In connect to the O connectors on the PCI-SCI adapter cards and, if you are using transport junctions, to the In ports on the SCI transport junctions.
Verify that the PCI-SCI switch unit selectors are set to F.
To increase Oracle Real Application Clusters performance, set the max-vc-number parameter in the /etc/system file on each node, as shown in the sketch after this list. Choose the value that corresponds to the number of interconnects in your configuration:
(2 PCI-SCI interconnects) max-vc-number = 32768
(3 PCI-SCI interconnects) max-vc-number = 49152
(4 PCI-SCI interconnects) max-vc-number = 65536
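The following is a minimal /etc/system sketch for a configuration with two PCI-SCI interconnects. Whether the parameter takes a driver-module prefix in the set directive is an assumption to verify against your SCI driver documentation:

```
* /etc/system fragment; lines that begin with * are comments
* Value for 2 PCI-SCI interconnects (use 49152 for 3, or 65536 for 4)
* Assumption: max-vc-number is set as a plain kernel tunable;
* check your SCI driver documentation for any module prefix
set max-vc-number=32768
```

Changes to the /etc/system file do not take effect until the node is rebooted.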
To install the Sun Cluster software and configure the Sun Cluster software with the new interconnect, see Chapter 2, Installing and Configuring Sun Cluster Software, in Sun Cluster Software Installation Guide for Solaris OS.
Table 3–3 lists the procedures for installing Sun Fire Link cluster interconnect hardware. Perform these procedures in the order in which they are listed.
Table 3–3 SPARC: Task Map: Installing Sun Fire Link Cluster Interconnect Hardware
Task | For Instructions
---|---
Install the transport adapters (Sun Fire Link optical module). | Sun Fire Link Hardware Installation Guide
Install the Sun Fire Link transport cables (Sun Fire Link cables). | Sun Fire Link Hardware Installation Guide
If you have a three-node or four-node cluster, install a Sun Fire Link transport junction (Sun Fire Link switch). | Sun Fire Link Hardware Installation Guide
Perform the Sun Fire Link software installation. | Sun Fire Link Software Installation Guide
Create and activate a dual-controller Sun Fire Link fabric. | Sun Fire Link Fabric Administrator's Guide
Sun Cluster software supports the use of private interconnect networks over switch-based virtual local area networks (VLANs). In a switch-based VLAN environment, Sun Cluster software enables multiple clusters and nonclustered systems to share an Ethernet transport junction (switch) in two different configurations.
The implementation of switch-based VLAN environments is vendor-specific. Because each switch manufacturer implements VLANs differently, the following guidelines describe the Sun Cluster software requirements for configuring VLANs with cluster interconnects.
You must understand your capacity needs before you set up a VLAN configuration. You must know the minimum bandwidth necessary for your interconnect and application traffic.
For the best results, set the Quality of Service (QOS) level for each VLAN to accommodate basic cluster traffic plus the desired application traffic. Ensure that the bandwidth that is allocated to each VLAN extends from node to node.
To determine the basic cluster traffic requirements, use the following equation. In this equation, n equals the number of nodes in the configuration, and s equals the number of switches per VLAN.
n × (s − 1) × 10 Mbit/s
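For example, under this equation a four-node cluster that uses two switches per VLAN requires at least 4 × (2 − 1) × 10 Mbit/s = 40 Mbit/s of capacity for basic cluster traffic on each VLAN, in addition to the bandwidth that your application traffic needs.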
Interconnect traffic must be placed in the highest-priority queue.
All ports must be equally serviced, similar to a round robin or first-in, first-out model.
You must verify that you have properly configured your VLANs to prevent path timeouts.
The first VLAN configuration enables nodes from multiple clusters to send interconnect traffic across one pair of Ethernet transport junctions. Sun Cluster software requires the use of at least two transport junctions to eliminate a single point of failure. The following figure is an example of the first VLAN configuration in a two-node cluster. VLAN configurations are not limited to two-node clusters.
The second VLAN configuration also uses the same transport junctions for the interconnect traffic of multiple clusters. However, the second VLAN configuration requires two pairs of transport junctions that are connected by links. This configuration enables VLANs to be supported in a campus cluster configuration with the same restrictions as other campus cluster configurations. The following figure illustrates the second VLAN configuration.