Sun Cluster 3.0 12/01 Hardware Guide

Installing Cluster Interconnect and Public Network Hardware

This section contains procedures for installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed. Separate procedures cover Ethernet-based interconnect hardware, PCI-SCI-based interconnect hardware, and public network hardware.

Installing Ethernet-Based Cluster Interconnect Hardware

Table 3-1 lists the procedures for installing Ethernet-based cluster interconnect hardware. Perform the procedures in the order listed, during an initial cluster installation, before Sun Cluster software is installed.

Table 3-1 Task Map: Installing Ethernet-Based Cluster Interconnect Hardware

Task                                                       For Instructions, Go To
---------------------------------------------------------  -------------------------------------------------------------------------
Install host adapters.                                     The documentation that shipped with your nodes and host adapters
Install the cluster transport cables (and transport        "How to Install Ethernet-Based Transport Cables and Transport Junctions"
junctions for clusters with more than two nodes).

How to Install Ethernet-Based Transport Cables and Transport Junctions

  1. If not already installed, install host adapters in your cluster nodes.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware.

  2. Install the transport cables (and optionally, transport junctions), depending on how many nodes are in your cluster:

    • A cluster with only two nodes can use a point-to-point connection, requiring no cluster transport junctions. Use a point-to-point (crossover) Ethernet cable if you are connecting 100BaseT or TPE ports of a node directly to ports on another node. Gigabit Ethernet uses the standard fiber optic cable for both point-to-point and switch configurations. See Figure 3-1.

      Figure 3-1 Typical Two-Node Cluster Interconnect



      Note -

      If you use a transport junction in a two-node cluster, you can later add nodes to the cluster without taking the cluster offline to reconfigure the transport path.


    • A cluster with more than two nodes requires two cluster transport junctions. These transport junctions are Ethernet-based switches (customer-supplied). See Figure 3-2.

      Figure 3-2 Typical Four-Node Cluster Interconnect

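The cabling rules above can be summarized as a small sketch. This is an illustrative helper only, not part of Sun Cluster software, and the function name is hypothetical; it returns the minimum number of transport junctions required for a given cluster size (a two-node cluster may still optionally use junctions, as noted above).

```python
def transport_junctions_needed(node_count: int) -> int:
    """Return the minimum number of cluster transport junctions required.

    A two-node cluster can use a point-to-point (crossover) connection,
    so no transport junctions are required. A cluster with more than two
    nodes requires two Ethernet-based transport junctions (switches).
    """
    if node_count < 2:
        raise ValueError("a cluster interconnect needs at least two nodes")
    return 0 if node_count == 2 else 2

# A two-node cluster can be cabled point to point.
print(transport_junctions_needed(2))  # 0
# A four-node cluster requires two transport junctions.
print(transport_junctions_needed(4))  # 2
```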

Where to Go From Here

You install the cluster software and configure the interconnect after you have installed all other hardware. To review the task map for installing cluster hardware and software, see "Installing Sun Cluster Hardware".

Installing PCI-SCI Cluster Interconnect Hardware

Table 3-2 lists the procedures for installing PCI-SCI-based cluster interconnect hardware. Perform the procedures in the order listed, during an initial cluster installation, before Sun Cluster software is installed.

Table 3-2 Task Map: Installing PCI-SCI Cluster Interconnect Hardware

Task                                                       For Instructions, Go To
---------------------------------------------------------  -------------------------------------------------------------------------
Install the PCI-SCI transport cables (and PCI-SCI switch   "How to Install PCI-SCI Transport Cables and Switches"
for four-node clusters).

How to Install PCI-SCI Transport Cables and Switches

  1. If not already installed, install PCI-SCI host adapters in your cluster nodes.

    For the procedure on installing PCI-SCI host adapters and setting their DIP switches, see the documentation that shipped with your PCI-SCI host adapters and node hardware.


    Note -

    SBus-SCI host adapters are not supported by Sun Cluster 3.0. If you are upgrading from a Sun Cluster 2.2 cluster, be sure to remove any SBus-SCI host adapters from the cluster nodes; otherwise, you might see panic error messages during the SCI self-test.


  2. Install the PCI-SCI transport cables (and, optionally, switches), depending on how many nodes are in your cluster:

    • A two-node cluster can use a point-to-point connection, requiring no switch. See Figure 3-3.

      Connect the ends of the cables marked "SCI Out" to the "O" connectors on the adapters.

      Connect the ends of the cables marked "SCI In" to the "I" connectors of the adapters as shown in Figure 3-3.

      Figure 3-3 Typical Two-Node PCI-SCI Cluster Interconnect


    • A four-node cluster requires SCI switches. See Figure 3-4 for a cabling diagram. See the SCI switch documentation that came with your hardware for more detailed instructions on installing and cabling the switches.

      Connect the ends of the cables that are marked "SCI Out" to the "O" connectors on the adapters and the "Out" connectors on the switches.

      Connect the ends of the cables that are marked "SCI In" to the "I" connectors of the adapters and "In" connectors on the switches. See Figure 3-4.


      Note -

      Set the Unit selectors on the fronts of the SCI switches to "F." Do not use the "X-Ports" on the SCI switches.


      Figure 3-4 Typical Four-Node PCI-SCI Cluster Interconnect

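The SCI cabling rule above (cable ends marked "SCI Out" go to "O" connectors on adapters and "Out" connectors on switches; ends marked "SCI In" go to "I" and "In" connectors) can be sketched as a small lookup. This is an illustrative check only, not a Sun Cluster tool, and the names are hypothetical.

```python
# Which connectors each labeled cable end may plug into, per the procedure above.
VALID_CONNECTIONS = {
    "SCI Out": {"O", "Out"},  # adapter "O" connectors, switch "Out" connectors
    "SCI In": {"I", "In"},    # adapter "I" connectors, switch "In" connectors
}

def connection_ok(cable_end: str, connector: str) -> bool:
    """Return True if a cable end with the given marking may plug into connector."""
    return connector in VALID_CONNECTIONS.get(cable_end, set())

print(connection_ok("SCI Out", "O"))   # True
print(connection_ok("SCI In", "Out"))  # False: In/Out reversed cabling
```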

Troubleshooting PCI-SCI Interconnects

If you have problems with your PCI-SCI interconnect, verify that each cable end marked "SCI Out" is connected to an "O" connector on an adapter or an "Out" connector on a switch, that each cable end marked "SCI In" is connected to an "I" or "In" connector, and that the Unit selectors on the SCI switches are set to "F."

Where to Go From Here

You install the cluster software and configure the interconnect after you have installed all other hardware. To review the task map for installing cluster hardware, see "Installing Sun Cluster Hardware".

Installing Public Network Hardware

This section covers installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed.

Physically installing public network adapters in a cluster node is no different from installing them in a non-cluster environment.

For the procedure on physically adding public network adapters, see the documentation that shipped with your nodes and public network adapters.

Where to Go From Here

You install the cluster software and configure the public network hardware after you have installed all other hardware. To review the task map for installing cluster hardware, see "Installing Sun Cluster Hardware".