Sun Cluster 3.0 12/01 Hardware Guide

Chapter 3 Installing and Maintaining Cluster Interconnect and Public Network Hardware

This chapter describes the procedures for installing and maintaining cluster interconnect and public network hardware. Where appropriate, this chapter includes separate procedures for the two supported varieties of Sun Cluster interconnect: Ethernet and peripheral component interconnect-scalable coherent interface (PCI-SCI).

This chapter contains procedures for installing cluster interconnect and public network hardware during an initial cluster installation, and procedures for maintaining that hardware in a running cluster. See "Installing Cluster Interconnect and Public Network Hardware" and "Maintaining Cluster Interconnect and Public Network Hardware".

For conceptual information on cluster interconnects and public network interfaces, see the Sun Cluster 3.0 12/01 Concepts document.

Installing Cluster Interconnect and Public Network Hardware

This section contains procedures for installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed. This section contains separate procedures for installing Ethernet-based interconnect hardware, PCI-SCI-based interconnect hardware, and public network hardware.

Installing Ethernet-Based Cluster Interconnect Hardware

Table 3-1 lists procedures for installing Ethernet-based cluster interconnect hardware. Perform the procedures in the order that they are listed. This section contains a procedure for installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed.

Table 3-1 Task Map: Installing Ethernet-Based Cluster Interconnect Hardware

Task: Install host adapters.
For instructions, go to: the documentation that shipped with your nodes and host adapters.

Task: Install the cluster transport cables (and transport junctions for clusters with more than two nodes).
For instructions, go to: "How to Install Ethernet-Based Transport Cables and Transport Junctions".

How to Install Ethernet-Based Transport Cables and Transport Junctions

  1. If not already installed, install host adapters in your cluster nodes.

    For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware.

  2. Install the transport cables (and, optionally, transport junctions), depending on how many nodes are in your cluster (a link-verification sketch follows the figures below):

    • A cluster with only two nodes can use a point-to-point connection, requiring no cluster transport junctions. Use a point-to-point (crossover) Ethernet cable if you are connecting 100BaseT or TPE ports of a node directly to ports on another node. Gigabit Ethernet uses the standard fiber optic cable for both point-to-point and switch configurations. See Figure 3-1.

      Figure 3-1 Typical Two-Node Cluster Interconnect



      Note -

      If you use a transport junction in a two-node cluster, you can later add nodes to the cluster without taking the cluster offline to reconfigure the transport path.


    • A cluster with more than two nodes requires two cluster transport junctions. These transport junctions are Ethernet-based switches (customer-supplied). See Figure 3-2.

      Figure 3-2 Typical Four-Node Cluster Interconnect

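    Before you install the Sun Cluster software, you can optionally verify at the Solaris level that each new interconnect link is up. The following sketch is only an illustration: it assumes hme-based host adapters and uses the standard Solaris ndd utility, so substitute the driver name and instance numbers of your actual transport adapters. A link_status value of 1 indicates that the link is up.

    # ndd -set /dev/hme instance 1
    # ndd -get /dev/hme link_status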

Where to Go From Here

You install the cluster software and configure the interconnect after you have installed all other hardware. To review the task map for installing cluster hardware and software, see "Installing Sun Cluster Hardware".

Installing PCI-SCI Cluster Interconnect Hardware

Table 3-2 lists procedures for installing PCI-SCI-based cluster interconnect hardware. Perform the procedures in the order that they are listed. This section contains a procedure for installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed.

Table 3-2 Task Map: Installing PCI-SCI Cluster Interconnect Hardware

Task: Install the PCI-SCI transport cables (and PCI-SCI switch for four-node clusters).
For instructions, go to: "How to Install PCI-SCI Transport Cables and Switches".

How to Install PCI-SCI Transport Cables and Switches

  1. If not already installed, install PCI-SCI host adapters in your cluster nodes.

    For the procedure on installing PCI-SCI host adapters and setting their DIP switches, see the documentation that shipped with your PCI-SCI host adapters and node hardware.


    Note -

    SBus-SCI host adapters are not supported by Sun Cluster 3.0. If you are upgrading from a Sun Cluster 2.2 cluster, be sure to remove any SBus-SCI host adapters from the cluster nodes, or you might see panic error messages during the SCI self test.


  2. Install the PCI-SCI transport cables (and, optionally, switches), depending on how many nodes are in your cluster:

    • A two-node cluster can use a point-to-point connection, requiring no switch. See Figure 3-3.

      Connect the ends of the cables marked "SCI Out" to the "O" connectors on the adapters.

      Connect the ends of the cables marked "SCI In" to the "I" connectors of the adapters as shown in Figure 3-3.

      Figure 3-3 Typical Two-Node PCI-SCI Cluster Interconnect


    • A four-node cluster requires SCI switches. See Figure 3-4 for a cabling diagram. See the SCI switch documentation that came with your hardware for more detailed instructions on installing and cabling the switches.

      Connect the ends of the cables that are marked "SCI Out" to the "O" connectors on the adapters and the "Out" connectors on the switches.

      Connect the ends of the cables that are marked "SCI In" to the "I" connectors of the adapters and "In" connectors on the switches. See Figure 3-4.


      Note -

      Set the Unit selectors on the fronts of the SCI switches to "F." Do not use the "X-Ports" on the SCI switches.


      Figure 3-4 Typical Four-Node PCI-SCI Cluster Interconnect


Troubleshooting PCI-SCI Interconnects

If you have problems with your PCI-SCI interconnect, check the following items. A quick command-level check is sketched after this list.

  • Verify that the cables marked "SCI Out" are connected to the "O" connectors on the adapters (and to the "Out" connectors on the switches), and that the cables marked "SCI In" are connected to the "I" connectors on the adapters (and to the "In" connectors on the switches).

  • Verify that the DIP switches on the PCI-SCI host adapters are set as described in the documentation that shipped with your host adapters.

  • Verify that the Unit selectors on the fronts of the SCI switches are set to "F" and that nothing is connected to the X-Ports.

  • Verify that no SBus-SCI host adapters remain installed from an earlier Sun Cluster 2.2 configuration.
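The following sketch is one way to confirm, from the Solaris environment, that the PCI-SCI host adapters were detected and to review any SCI self-test messages. The grep patterns are assumptions; adjust them to the driver name and messages that your adapters actually report.

# prtconf -D | grep -i sci
# grep -i sci /var/adm/messages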

Where to Go From Here

You install the cluster software and configure the interconnect after you have installed all other hardware. To review the task map for installing cluster hardware, see "Installing Sun Cluster Hardware".

Installing Public Network Hardware

This section covers installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed.

Physically installing public network adapters in a node in a cluster is no different from installing public network adapters in a non-cluster environment.

For the procedure on physically adding public network adapters, see the documentation that shipped with your nodes and public network adapters.

Where to Go From Here

You install the cluster software and configure the public network hardware after you have installed all other hardware. To review the task map for installing cluster hardware, see "Installing Sun Cluster Hardware".
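After the node is booted with the new adapter installed, you can optionally confirm that the Solaris environment sees the adapter before you configure it for Sun Cluster. The following sketch is only an illustration; the qfe driver and the qfe2 interface name are assumptions, so substitute the driver and instance of your actual public network adapter.

# grep qfe /etc/path_to_inst
# ifconfig qfe2 plumb
# ifconfig qfe2
# ifconfig qfe2 unplumb

The final unplumb returns the interface to its unconfigured state so that later Sun Cluster configuration starts from a clean state.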

Maintaining Cluster Interconnect and Public Network Hardware

The following table lists procedures for maintaining cluster interconnect and public network hardware. The interconnect maintenance procedures in this section are for both Ethernet-based and PCI-SCI interconnects.

Table 3-3 Task Map: Maintaining Cluster Interconnect and Public Network Hardware

Task: Add interconnect host adapters.
For instructions, go to: "How to Add Host Adapters".

Task: Replace interconnect host adapters.
For instructions, go to: "How to Replace Host Adapters".

Task: Remove interconnect host adapters.
For instructions, go to: "How to Remove Host Adapters".

Task: Add transport cables and transport junctions.
For instructions, go to: "How to Add Transport Cables and Transport Junctions".

Task: Replace transport cables and transport junctions.
For instructions, go to: "How to Replace Transport Cables and Transport Junctions".

Task: Remove transport cables and transport junctions.
For instructions, go to: "How to Remove Transport Cables and Transport Junctions".

Task: Add public network adapters.
For instructions, go to: "How to Add Public Network Adapters".

Task: Replace public network adapters.
For instructions, go to: "How to Replace Public Network Adapters".

Task: Remove public network adapters.
For instructions, go to: "How to Remove Public Network Adapters".

Maintaining Interconnect Hardware in a Running Cluster

The maintenance procedures in this section are for both Ethernet-based and PCI-SCI interconnects.

How to Add Host Adapters

This section contains the procedure for adding host adapters to nodes in a running cluster. For conceptual information on host adapters, see the Sun Cluster 3.0 12/01 Concepts document.

  1. Shut down the node in which you are installing the host adapter.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0 
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  2. Power off the node.

    For the procedure on powering off a node, see the documentation that shipped with your node.

  3. Install the host adapter.

    For the procedure on installing host adapters and setting their DIP switches, see the documentation that shipped with your host adapter and node hardware.

  4. Power on and boot the node.


    ok boot -r
    

    For the procedures on powering on and booting a node, see the Sun Cluster 3.0 12/01 System Administration Guide.
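    After the node boots, you can optionally confirm that it rejoined the cluster and that its transport paths are online. The following sketch assumes the scstat node-status (-n) and transport-status (-W) options; for the full status procedures, see the Sun Cluster 3.0 12/01 System Administration Guide.

    # scstat -n
    # scstat -W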

Where to Go From Here

When you are finished adding all of your interconnect hardware, if you want to reconfigure Sun Cluster with the new interconnect components, see the Sun Cluster 3.0 12/01 System Administration Guide for instructions on administering the cluster interconnect.

How to Replace Host Adapters

This section contains the procedure for replacing a failed host adapter in a node in a running cluster. For conceptual information on host adapters, see the Sun Cluster 3.0 12/01 Concepts document.


Caution -

You must maintain at least one cluster interconnect between the nodes of a cluster. The cluster does not function without a working cluster interconnect. You can check the status of the interconnect with the scstat -W command. For more details on checking the status of the cluster interconnect, see the Sun Cluster 3.0 12/01 System Administration Guide.
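For example, run the following command from another cluster node before you shut down the node that has the failed host adapter, and confirm that the remaining transport path is reported as online:

# scstat -W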


  1. Shut down the node with the host adapter you want to replace.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  2. Power off the node.

    For the procedure on powering off your node, see the documentation that shipped with your node.

  3. Disconnect the transport cable from the host adapter and other devices.

    For the procedure on disconnecting cables from host adapters, see the documentation that shipped with your host adapter and node.

  4. Replace the host adapter.

    For the procedure on replacing host adapters, see the documentation that shipped with your host adapter and node.

  5. Reconnect the transport cable to the new host adapter.

    For the procedure on connecting cables to host adapters, see the documentation that shipped with your host adapter and node.

  6. Power on and boot the node.


    ok boot -r
    

    For the procedures on powering on and booting a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

Where to Go From Here

When you are finished replacing all of your interconnect hardware, if you want to reconfigure Sun Cluster with the new interconnect components, see the Sun Cluster 3.0 12/01 System Administration Guide for instructions on administering the cluster interconnect.

How to Remove Host Adapters

This section contains the procedure for removing an unused host adapter from a node in a running cluster. For conceptual information on host adapters, see the Sun Cluster 3.0 12/01 Concepts document.


Caution -

You must maintain at least one cluster interconnect between the nodes of a cluster. The cluster does not function without a working cluster interconnect.


  1. Verify that the host adapter you want to remove is not configured in the Sun Cluster software configuration. One way to review the configured transport adapters is shown in the sketch after this list.

    • If the host adapter you want to remove appears in the Sun Cluster software configuration, remove the host adapter from the Sun Cluster configuration. To remove a transport path, follow the procedures in the Sun Cluster 3.0 12/01 System Administration Guide before going to Step 2.

    • If the host adapter you want to remove does not appear in the Sun Cluster software configuration, go to Step 2.
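    The following sketch shows one way to review the transport adapters that are currently configured. The scconf -p command prints the cluster configuration; look for the transport adapter entries for the node. Reviewing the output this way is only a convenience; the interconnect administration procedures in the Sun Cluster 3.0 12/01 System Administration Guide are the authoritative reference.

    # scconf -p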

  2. Shut down the node that contains the host adapter you want to remove.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  3. Power off the node.

    For the procedure on powering off a node, see the documentation that shipped with your node.

  4. Disconnect the transport cables from the host adapter you want to remove.

    For the procedure on disconnecting cables from host adapters, see the documentation that shipped with your host adapter and node.

  5. Remove the host adapter.

    For the procedure on removing host adapters, see the documentation that shipped with your host adapter and node.

  6. Power on and boot the node.


    ok boot -r
    

    For the procedures on powering on and booting a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

How to Add Transport Cables and Transport Junctions

This section contains the procedure for adding transport cables and/or transport junctions (switches) in a running cluster.

  1. Shut down the node that is to be connected to the new transport cable and/or transport junction (switch).


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0 
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  2. Install the transport cable and/or transport junction (switch).

  3. Boot the node that you shut down in Step 1.


    ok boot -r
    

    For the procedure on booting a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

Where to Go From Here

When you are finished adding all of your interconnect hardware, if you want to reconfigure Sun Cluster with the new interconnect components, see the Sun Cluster 3.0 12/01 System Administration Guide for instructions on administering the cluster interconnect.

How to Replace Transport Cables and Transport Junctions

This section contains the procedure for replacing failed transport cables and/or transport junctions (switches) in a running cluster.


Caution -

You must maintain at least one cluster interconnect between the nodes of a cluster. The cluster does not function without a working cluster interconnect.


  1. Shut down the node that is connected to the transport cable or transport junction (switch) you are replacing.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  2. Disconnect the failed transport cable and/or transport junction (switch) from the other cluster devices.

    For the procedure on disconnecting cables from host adapters, see the documentation that shipped with your host adapter and node.

  3. Connect the new transport cable and/or transport junction (switch) to the other cluster devices.

  4. Boot the node that you shut down in Step 1.


    ok boot -r
    

    For the procedure on booting a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

Where to Go From Here

When you are finished replacing all of your interconnect hardware, if you want to reconfigure Sun Cluster with the new interconnect components, see the Sun Cluster 3.0 12/01 System Administration Guide for instructions on administering the cluster interconnect.

How to Remove Transport Cables and Transport Junctions

This section contains the procedure for removing an unused transport cable or transport junction (switch) from a node in a running cluster.


Caution -

You must maintain at least one cluster interconnect between the nodes of a cluster. The cluster does not function without a working cluster interconnect.


  1. Check to see whether the transport cable and/or transport junction (switch) you want to remove appears in the Sun Cluster software configuration.

    • If the interconnect component you want to remove appears in the Sun Cluster software configuration, remove the interconnect component from the Sun Cluster configuration. To remove an interconnect component, follow the interconnect administration procedures in the Sun Cluster 3.0 12/01 System Administration Guide before going to Step 2.

    • If the interconnect component you want to remove does not appear in the Sun Cluster software configuration, go to Step 2.

  2. Shut down the node that is connected to the transport cable and/or transport junction (switch) you are removing.


    # scswitch -S -h nodename
    # shutdown -y -g0 -i0
    

    For the procedure on shutting down a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

  3. Disconnect the transport cables and/or transport junction (switch) from the other cluster devices.

    For the procedure on disconnecting cables from host adapters, see the documentation that shipped with your host adapter and node.

  4. Boot the node.


    ok boot -r
    

    For the procedure on booting a node, see the Sun Cluster 3.0 12/01 System Administration Guide.

Maintaining Public Network Hardware in a Running Cluster

How to Add Public Network Adapters

Physically adding public network adapters to a node in a cluster is no different from adding public network adapters in a non-cluster environment.

For the procedure on physically adding public network adapters, see the hardware documentation that shipped with your node and public network adapters.

Where to Go From Here

To add a new public network adapter to a NAFO group, see the Sun Cluster 3.0 12/01 System Administration Guide.
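After you add the adapter to a NAFO group, a quick way to review the group and its status is with the Public Network Management (PNM) status utility, as sketched below. The pnmstat -l usage shown here is an assumption; see the pnmstat(1M) man page for the exact options.

# pnmstat -l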

How to Replace Public Network Adapters

Physically replacing public network adapters in a node in a cluster is no different from replacing public network adapters in a non-cluster environment.

For the procedure on physically replacing public network adapters, see the hardware documentation that shipped with your node and public network adapters.

Where to Go From Here

To add the new public network adapter to a NAFO group, see the Sun Cluster 3.0 12/01 System Administration Guide.

How to Remove Public Network Adapters

Removing public network adapters from a node in a cluster is no different from removing public network adapters in a non-cluster environment. For procedures related to administering public network connections, see the Sun Cluster 3.0 12/01 System Administration Guide.

For the procedure on removing public network adapters, see the hardware documentation that shipped with your node and public network adapters.

Sun Gigabit Ethernet Adapter Considerations

Some Gigabit Ethernet switches require device parameter values that differ from the defaults. Chapter 3 of the Sun Gigabit Ethernet/P 2.0 Adapter Installation and User's Guide describes the procedure for changing device parameters. On nodes running Sun Cluster 3.0 software, the procedure differs slightly in one respect: how you derive the parent names, used in the ge.conf file, from the /etc/path_to_inst file.

Chapter 3 of the Sun Gigabit Ethernet/P 2.0 Adapter Installation and User's Guide describes the procedure for changing ge device parameter values through entries in the /kernel/drv/ge.conf file. The procedure to derive the parent name from the /etc/path_to_inst listing (to be used in ge.conf entries) appears in "Setting Driver Parameters Using a ge.conf File." For example, from the following /etc/path_to_inst line, you can derive the parent name for ge2 to be /pci@4,4000.


"/pci@4,4000/network@4" 2 "ge"

On Sun Cluster 3.0 nodes, a /node@nodeid prefix appears in the /etc/path_to_inst line. Do not consider the /node@nodeid prefix when you derive the parent name. For example, on a cluster node, an equivalent /etc/path_to_inst entry would be the following:


"/node@1/pci@4,4000/network@4" 2 "ge"

The parent name for ge2, to be used in the ge.conf file, is still /pci@4,4000 in this instance.
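As an illustration only, the following sketch shows how you might locate the cluster node's /etc/path_to_inst entry and then use the derived parent name in a /kernel/drv/ge.conf entry. The entry uses standard driver.conf(4) syntax; the adv_1000autoneg_cap parameter is just a placeholder, so set only the parameters that your switch actually requires, as described in the Sun Gigabit Ethernet/P 2.0 Adapter Installation and User's Guide.

# grep '"ge"' /etc/path_to_inst
"/node@1/pci@4,4000/network@4" 2 "ge"

The corresponding ge.conf entry drops the /node@1 prefix from the parent name:

name="ge" parent="/pci@4,4000" unit-address="4" adv_1000autoneg_cap=0;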