Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS

Chapter 3 Installing Cluster Interconnect Hardware and Configuring VLANs

This chapter describes the procedures to install cluster interconnect hardware. Where appropriate, this chapter includes separate procedures for the interconnects that Sun Cluster software supports.


Interconnect Requirements and Restrictions

This section describes the requirements and restrictions that apply to cluster interconnect operation when you use certain special features.

Cluster Interconnect and Routing

Heartbeat packets that are sent over the cluster interconnect are not IP based. As a result, these packets cannot be routed. If you install a router between two cluster nodes that are connected through cluster interconnects, heartbeat packets cannot find their destination. Your cluster consequently fails to work correctly.

To ensure that your cluster works correctly, you must set up the cluster interconnect in the same layer 2 (data link) network and in the same broadcast domain. The cluster interconnect must be located in the same layer 2 network and broadcast domain even if the cluster nodes are located in different, remote data centers. Cluster nodes that are arranged remotely are described in more detail in Chapter 7, Campus Clustering With Sun Cluster Software.
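As a quick check from one node, you can confirm that an interconnect adapter is plumbed on the same subnet as its peer and that the peer is reached directly, with no router in the path. This is a minimal sketch only: the interface name (hme1) and the address are examples, so substitute the adapter and private network addresses that your cluster actually uses.

    # ifconfig hme1
    # traceroute 172.16.1.2

If traceroute reports more than one hop to the remote interconnect address, a router lies between the nodes and the interconnect does not meet the layer 2 requirement.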

Cluster Interconnect Speed Requirements

An interconnect path is one network step in the cluster private network: from a node to a node, from a node to a switch, or from a switch to a node. Each path in your cluster interconnect must use the same networking technology, whether Ethernet or Peripheral Component Interconnect-Scalable Coherent Interface (PCI-SCI).

All interconnect paths must also operate at the same speed. This means, for example, that if you are using Ethernet components that are capable of operating at different speeds, and if your cluster configuration does not allow these components to automatically negotiate a common network speed, you must configure them to operate at the same speed.
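For example, on an adapter that uses the hme or qfe driver, you can disable autonegotiation and force 100 Mbit/sec full-duplex operation with the ndd command. This is a minimal sketch only: the device name (/dev/hme) and the parameter names are specific to these drivers, and other drivers such as ce use different parameter names, so see your adapter documentation for the equivalent settings and for how to make them persistent across reboots.

    # ndd -set /dev/hme adv_100fdx_cap 1
    # ndd -set /dev/hme adv_100hdx_cap 0
    # ndd -set /dev/hme adv_10fdx_cap 0
    # ndd -set /dev/hme adv_10hdx_cap 0
    # ndd -set /dev/hme adv_autoneg_cap 0
    # ndd -get /dev/hme link_speed

The ndd -get command reports the current link speed so that you can confirm that both ends of each interconnect path are operating at the same value.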

Ethernet Switch Configuration When Used in the Cluster Interconnect

When you configure Ethernet switches for your cluster private interconnect, disable the spanning tree algorithm on the ports that are used for the interconnect.
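The exact commands are vendor specific. As an illustration only, on many Cisco IOS based switches you can disable spanning tree for the VLAN that carries the interconnect, or enable PortFast so that the interconnect ports transition to forwarding immediately. The VLAN ID and interface name below are examples; consult your switch documentation for the equivalent procedure on your hardware.

    switch(config)# no spanning-tree vlan 10
    switch(config)# interface GigabitEthernet0/1
    switch(config-if)# spanning-tree portfast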

Requirements When Using Jumbo Frames

If you use Scalable Data Services and jumbo frames on your public network, ensure that the Maximum Transfer Unit (MTU) of the private network is the same size or larger than the MTU of your public network.


Note –

Scalable services cannot forward public network packets that are larger than the MTU size of the private network. The scalable services application instances will not receive those packets.



For information about how to configure jumbo frames, see the documentation that shipped with your network interface card. See your Solaris OS documentation or contact your Sun sales representative for other Solaris restrictions.
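As a quick check, you can compare the MTU of a public network interface and a private interconnect interface with the ifconfig command. The interface names below (ce0 for the public network and ce1 for the interconnect) are examples only.

    # ifconfig ce0 | grep mtu
    # ifconfig ce1 | grep mtu

The mtu value reported for the interconnect interface must be equal to or larger than the value reported for the public network interface.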

Requirements and Restrictions When Using InfiniBand in the Cluster Interconnect

The following requirements and guidelines apply to Sun Cluster Geographic Edition configurations that use InfiniBand adapters:

Restriction on SCI Card Placement

Do not place a Scalable Coherent Interface (SCI) card in the 33 MHz PCI slot (slot 1) of the hot swap PCI+ (hsPCI+) I/O assembly. This placement can cause a system panic.

Installing Ethernet or InfiniBand Cluster Interconnect Hardware

The following table lists procedures for installing Ethernet or InfiniBand cluster interconnect hardware. Perform the procedures in the order that they are listed. This section contains the procedure for installing cluster hardware during an initial installation of a cluster, before you install Sun Cluster software.

Table 3–1 Installing Ethernet or InfiniBand Cluster Interconnect Hardware

Task: Install the transport adapters.
For Instructions: The documentation that shipped with your nodes and host adapters.

Task: Install the transport cables.
For Instructions: How to Install Ethernet or InfiniBand Transport Cables and Transport Junctions.

Task: If your cluster contains more than two nodes, install a transport junction (switch).
For Instructions: How to Install Ethernet or InfiniBand Transport Cables and Transport Junctions.

Procedure: How to Install Ethernet or InfiniBand Transport Cables and Transport Junctions

Use this procedure to install Ethernet or InfiniBand transport cables and transport junctions (switches).

  1. If not already installed, install transport adapters in your cluster nodes.

    See the documentation that shipped with your host adapters and node hardware.

  2. If necessary, install transport junctions and optionally configure the transport junctions' IP addresses.


    Note –

    (InfiniBand Only) If you install one InfiniBand adapter on a cluster node, two InfiniBand switches are required. Each of the two ports must be connected to a different InfiniBand switch.

    If two InfiniBand adapters are connected to a cluster node, connect only one port on each adapter to the InfiniBand switch. The second port of the adapter must remain disconnected. Do not connect ports of the two InfiniBand adapters to the same InfiniBand switch.


  3. Install the transport cables.

    • (Ethernet Only) As the following figure shows, a cluster with only two nodes can use a point-to-point connection, requiring no transport junctions.

      Figure 3–1 (Ethernet Only) Typical Two-Node Cluster Interconnect

      Illustration: Two nodes directly connected to form two interconnects.

      (Ethernet Only) For a point-to-point connection, you can use either UTP or fibre. With fibre, use a standard patch cable. A crossover cable is unnecessary. With UTP, see your network interface card documentation to determine whether you need a crossover cable.


      Note –

      (Ethernet Only) You can optionally use transport junctions in a two-node cluster. If you use a transport junction in a two-node cluster, you can more easily add additional nodes later. To ensure redundancy and availability, always use two transport junctions.


    • As the following figure shows, a cluster with more than two nodes requires transport junctions. These transport junctions are Ethernet or InfiniBand switches (customer-supplied).

      Figure 3–2 Typical Four-Node Cluster Interconnect

      Illustration: Four nodes and two switches, with one connection from each node to each switch to form two interconnects.
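
After the Sun Cluster software is installed and the new interconnect is configured, you can confirm that the transport paths are online. The following commands are a minimal check; the scstat command applies to Sun Cluster 3.1 and the clinterconnect command to Sun Cluster 3.2.

    # scstat -W
    # clinterconnect status

Each interconnect path should be reported as online. A faulted path usually points to a cabling problem or to a misconfigured switch port, for example a port on which the spanning tree algorithm is still enabled.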

See Also

To install and configure the Sun Cluster software with the new interconnect, see Chapter 2, Installing Software on the Cluster, in Sun Cluster Software Installation Guide for Solaris OS.

(Ethernet Only) To configure jumbo frames on the interconnect, review the requirements in Requirements When Using Jumbo Frames and see the Sun GigaSwift documentation for instructions.

SPARC: Installing PCI-SCI Cluster Interconnect Hardware

Table 3–2 lists procedures for installing Peripheral Component Interconnect-Scalable Coherent Interface (PCI-SCI) cluster interconnect hardware. Perform the procedures in the order that they are listed. This section contains the procedure for installing cluster hardware during an initial installation of a cluster, before you install Sun Cluster software.

Table 3–2 SPARC: Task Map: Installing PCI-SCI Cluster Interconnect Hardware

Task: Install the transport adapters.
For Instructions: The documentation that shipped with your nodes and host adapters.

Task: Install the PCI-SCI transport cables.
For Instructions: SPARC: How to Install PCI-SCI Transport Cables and Transport Junctions.

Task: If you have a three-node or four-node cluster, install a PCI-SCI transport junction (switch).
For Instructions: SPARC: How to Install PCI-SCI Transport Cables and Transport Junctions.

Procedure: SPARC: How to Install PCI-SCI Transport Cables and Transport Junctions

Use this procedure to install PCI-SCI transport cables and transport junctions (switches).

When you perform this procedure, error messages might be displayed on your console.

  1. If not already installed, install PCI-SCI transport adapters in your cluster nodes.

    For the procedure about how to install PCI-SCI transport adapters and set their DIP switches, see the documentation that shipped with your PCI-SCI host adapters and node hardware.


    Note –

    SBus-SCI host adapters are not supported by Sun Cluster software. If you are upgrading from a Sun Cluster 2.2 cluster, remove any SBus-SCI host adapters from the cluster nodes. If you do not remove these adapters, you might see panic error messages during the SCI self-test.


  2. Install the PCI-SCI transport cables and optionally, transport junctions, depending on how many nodes are in your cluster.

    • Configuration With Point-to-Point Connections:

      A two-node cluster can use a point-to-point connection, requiring no transport junction.

      1. Connect the ends of the cables that are marked SCI Out to the I connectors on the adapters.

      2. Connect the ends of the cables that are marked SCI In to the O connectors on the adapters.

      See the following diagrams for cabling details.

    • Configuration With Transport Junction:

      A three-node or four-node cluster requires SCI transport junctions.

      1. Set the Unit selectors on the fronts of the SCI transport junctions to F. Do not use the X-Ports on the SCI transport junctions.

      2. Connect the ends of the cables that are marked SCI Out to the I connectors on the adapters and the Out connectors on the transport junctions.

      3. Connect the ends of the cables that are marked SCI In to the O connectors on the adapters and the In connectors on the transport junctions.

      See the following diagrams for cabling details. For the procedure about how to install and cable, see the SCI switch documentation that shipped with your hardware switches.

    Figure 3–3 Configuration With Point-to-Point Connections: Two Interconnects

    Illustration: Node A directly connects to Node B via two PCI-SCI connections to form two interconnects.

    Figure 3–4 Configuration With Point-to-Point Connections: Four Interconnects

    Illustration: Node A directly connects to Node B via four PCI-SCI connections to form four interconnects.

    Figure 3–5 Configuration With Transport Junction: Two Interconnects

    Illustration: Four nodes and two switches, with one connection from each node to each switch to form two interconnects. Each node connects to the same port on each switch.

    Figure 3–6 Configuration With Transport Junction: Four Interconnects

    Illustration: Four nodes and four switches, with one connection from each node to each switch to form four interconnects. Each node connects to the same port on each switch.

Troubleshooting

If you have problems with your PCI-SCI interconnect, verify that the DIP switches on the PCI-SCI adapters are set correctly, that the cable ends that are marked SCI In and SCI Out are connected to the correct connectors, and that the Unit selectors on the SCI transport junctions are set to F. For additional troubleshooting information, see the documentation that shipped with your PCI-SCI hardware.

See Also

To install and configure the Sun Cluster software with the new interconnect, see Chapter 2, Installing Software on the Cluster, in Sun Cluster Software Installation Guide for Solaris OS.

SPARC: Installing Sun Fire Link Cluster Interconnect Hardware

Table 3–3 lists procedures for installing Sun Fire Link cluster interconnect hardware. Perform the procedures in the order that they are listed.

Table 3–3 SPARC: Task Map: Installing Sun Fire Link Cluster Interconnect Hardware

Task: Install the transport adapters (Sun Fire Link optical module).
For Instructions: Sun Fire Link Hardware Installation Guide

Task: Install the Sun Fire Link transport cables (Sun Fire Link cables).
For Instructions: Sun Fire Link Hardware Installation Guide

Task: If you have a three-node or four-node cluster, install a Sun Fire Link transport junction (Sun Fire Link switch).
For Instructions: Sun Fire Link Hardware Installation Guide

Task: Perform the Sun Fire Link software installation.
For Instructions: Sun Fire Link Software Installation Guide

Task: Create and activate a dual-controller Sun Fire Link fabric.
For Instructions: Sun Fire Link Fabric Administrator's Guide

Configuring VLANs as Private Interconnect Networks

Sun Cluster software supports the use of private interconnect networks over switch-based virtual local area networks (VLANs). In a switch-based VLAN environment, Sun Cluster software enables multiple clusters and nonclustered systems to share an Ethernet transport junction (switch) in two different configurations.


Note –

Even if clusters share the same switch, create a separate VLAN for each cluster.

By default, Sun Cluster uses the same set of IP addresses on the private interconnect. Creating a separate VLAN for each cluster ensures that IP traffic from one cluster does not interfere with IP traffic from another cluster. Unless you have customized the default IP address for the private interconnect, as described in How to Change the Private Network Address or Address Range of an Existing Cluster in Sun Cluster System Administration Guide for Solaris OS, create a separate VLAN for each cluster.


The implementation of switch-based VLAN environments is vendor-specific. Because each switch manufacturer implements VLANs differently, the following guidelines address the Sun Cluster software requirements for configuring VLANs with cluster interconnects.
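
On the cluster nodes, tagged VLAN interfaces follow the standard Solaris naming convention, in which the interface instance number encodes the VLAN ID (VLAN ID x 1000 + physical instance number). As an illustrative sketch only, assuming the ce driver, physical adapter ce1, and VLAN ID 2 (both the adapter and the VLAN ID are examples), the tagged interface is named ce2001 and can be plumbed to verify connectivity through the switch.

    # ifconfig ce2001 plumb
    # ifconfig ce2001
    # ifconfig ce2001 unplumb

When you configure the cluster transport, specify the tagged interface name (ce2001 in this example) rather than the physical adapter name. See your switch documentation for the commands that assign the corresponding switch ports to the VLAN.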

The first VLAN configuration enables nodes from multiple clusters to send interconnect traffic across one pair of Ethernet transport junctions. Sun Cluster software requires a minimum of one transport junction, and each transport junction must be part of a VLAN that is located on a different switch. The following figure is an example of the first VLAN configuration in a two-node cluster. VLAN configurations are not limited to two-node clusters.

Figure 3–7 First VLAN Configuration

 Illustration: The preceding context describes the graphic.

The second VLAN configuration uses the same transport junctions for the interconnect traffic of multiple clusters. However, the second VLAN configuration has two pairs of transport junctions that are connected by links. This configuration enables VLANs to be supported in a campus cluster configuration with the same restrictions as other campus cluster configurations. The following figure illustrates the second VLAN configuration.

Figure 3–8 Second VLAN Configuration

 Illustration: The preceding context describes the graphic.