This chapter describes how to maintain public network hardware and covers the following topics.
For conceptual information about cluster interconnects and public network interfaces, see your Sun Cluster concepts documentation.
For information about how to administer public network interfaces, see your Sun Cluster system administration documentation.
If you use scalable data services and jumbo frames on your public network, ensure that the maximum transmission unit (MTU) of the private network is the same size as, or larger than, the MTU of your public network.
Scalable services cannot forward public network packets that are larger than the MTU of the private network, so the scalable services application instances do not receive those packets.
Consider the following information when configuring jumbo frames:
The maximum MTU size for an InfiniBand interface is typically less than the maximum MTU size for an Ethernet interface.
If you use switches in your private network, ensure they are configured to the MTU sizes of the private network interfaces.
For information about how to configure jumbo frames, see the documentation that shipped with your network interface card. See your Solaris OS documentation or contact your Sun sales representative for other Solaris restrictions.
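The MTU rule above is easy to verify in a script once you know the two values. The following is a minimal sketch; the MTU values shown are placeholders that you would replace with the values reported by your own interface configuration tools (for example, ifconfig on Solaris):

```shell
#!/bin/sh
# Hypothetical MTU values; substitute the values reported by your
# own interface configuration tools.
PRIVATE_MTU=9194   # private interconnect MTU (example value)
PUBLIC_MTU=9000    # public network MTU (example value)

# Scalable services cannot forward packets larger than the private
# network MTU, so the private MTU must be >= the public MTU.
if [ "$PRIVATE_MTU" -ge "$PUBLIC_MTU" ]; then
    echo "OK: private MTU ($PRIVATE_MTU) >= public MTU ($PUBLIC_MTU)"
else
    echo "WARNING: private MTU ($PRIVATE_MTU) < public MTU ($PUBLIC_MTU)" >&2
fi
```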
This section covers installing cluster hardware during an initial cluster installation, before Sun Cluster software is installed.
Physically installing public network adapters in a cluster node is no different from installing them in a noncluster environment.
For the procedure about how to add public network adapters, see the documentation that shipped with your nodes and public network adapters.
Install the cluster software and configure the public network hardware after you have installed all other hardware. To review the task map about how to install cluster hardware, see Installing Sun Cluster Hardware.
If your network uses jumbo frames, review the requirements in Public Network Hardware: Requirements When Using Jumbo Frames and see the Sun GigaSwift documentation for information about how to configure jumbo frames.
The following table lists procedures about how to maintain public network hardware.
Table 5–1 Task Map: Maintaining Public Network Hardware
| Task | Information |
|---|---|
| Add public network adapters. | |
| Replace public network adapters. | |
| Remove public network adapters. | |
Physically adding public network adapters to a node in a cluster is no different from adding public network adapters in a noncluster environment. For the procedure about how to add public network adapters, see the hardware documentation that shipped with your node and public network adapters.
Once the adapters are physically installed, Sun Cluster requires that they be configured in an IPMP group.
If your network uses jumbo frames, review the requirements in Public Network Hardware: Requirements When Using Jumbo Frames and see the documentation that shipped with your network interface card for information about how to configure jumbo frames.
To add a new public network adapter to an IPMP group, see the IP Network Multipathing Administration Guide.
For cluster-specific commands and guidelines about how to replace public network adapters, see your Sun Cluster system administration documentation.
For procedures about how to administer public network connections, see the IP Network Multipathing Administration Guide.
For the procedure about removing public network adapters, see the hardware documentation that shipped with your node and public network adapters.
To add the new public network adapter to an IPMP group, see your Sun Cluster system administration documentation.
For cluster-specific commands and guidelines about how to remove public network adapters, see your Sun Cluster system administration documentation.
For procedures about how to administer public network connections, see the IP Network Multipathing Administration Guide.
For the procedure about how to remove public network adapters, see the hardware documentation that shipped with your node and public network adapters.
Some Gigabit Ethernet switches require device parameter values that differ from the defaults. Chapter 3 of the Sun Gigabit Ethernet/P 2.0 Adapter Installation and User's Guide describes how to change these device parameters. If you are using an operating system earlier than the Solaris 10 OS, the procedure that you use on nodes that are running Sun Cluster software varies slightly from the procedure that is described in the guide. In particular, the difference is in how you derive parent names for use in the ge.conf file from the /etc/path_to_inst file.
Chapter 3 of the Sun Gigabit Ethernet/P 2.0 Adapter Installation and User's Guide describes how to change ge device parameter values through entries in the /kernel/drv/ge.conf file. The procedure for deriving the parent name from the /etc/path_to_inst listing, which is used in ge.conf entries, appears in Setting Driver Parameters Using a ge.conf File. For example, from the following /etc/path_to_inst line, you can derive the parent name for ge2 to be /pci@4,4000.
"/pci@4,4000/network@4" 2 "ge" |
On Sun Cluster nodes, a /node@nodeid prefix appears in the /etc/path_to_inst line. Do not consider the /node@nodeid prefix when you derive the parent name. For example, on a cluster node, an equivalent /etc/path_to_inst entry would be the following:
"/node@1/pci@4,4000/network@4" 2 "ge" |
The parent name for ge2, to be used in the ge.conf file, is still /pci@4,4000 in this instance.
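The derivation described above can be scripted. The following sketch is an illustration only (the sed expressions are not part of the Sun documentation): it extracts the quoted device path from a /etc/path_to_inst entry, drops the /node@nodeid prefix, and then drops the final device component to produce the parent name:

```shell
# Example /etc/path_to_inst entry from a cluster node.
entry='"/node@1/pci@4,4000/network@4" 2 "ge"'

# Extract the device path (the first quoted field).
path=$(printf '%s\n' "$entry" | sed 's/^"\([^"]*\)".*/\1/')

# Drop the /node@N prefix, then drop the final /network@N component
# to obtain the parent name for use in ge.conf.
parent=$(printf '%s\n' "$path" | sed -e 's|^/node@[0-9][0-9]*||' -e 's|/[^/]*$||')
echo "$parent"   # /pci@4,4000
```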
The software driver for the Sun GigaSwift Ethernet adapter is known as the Cassini Ethernet (ce) driver. The Sun Cluster software supports the ce driver for cluster interconnect and public network applications. Consult your Sun service representative for details about the network interface products that are supported.
When you use the ce Sun Ethernet driver for the private cluster interconnect, add the following kernel parameters to the /etc/system file on all the nodes in the cluster to avoid communication problems over the private cluster interconnect.
set ce:ce_taskq_disable=1
set ce:ce_ring_size=1024
set ce:ce_comp_ring_size=4096
If you do not set these three kernel parameters when using the ce driver for the private cluster interconnect, one or more of the cluster nodes might panic due to a loss of communication between the nodes of the cluster. In these cases, check for the following panic messages.
Reservation conflict
CMM: Cluster lost operational quorum; aborting
CMM: Halting to prevent split brain with node name
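One way to catch a missing entry before it causes a panic is to check that all three parameters appear in /etc/system on each node. The following is a minimal sketch; it builds a sample file for demonstration, whereas on a live cluster node you would point SYSTEM_FILE at /etc/system itself:

```shell
# Build a sample /etc/system fragment to check (on a live cluster
# node, set SYSTEM_FILE=/etc/system instead).
SYSTEM_FILE=$(mktemp)
cat > "$SYSTEM_FILE" <<'EOF'
set ce:ce_taskq_disable=1
set ce:ce_ring_size=1024
set ce:ce_comp_ring_size=4096
EOF

# Report any of the three required ce parameters that is absent.
missing=0
for param in ce_taskq_disable ce_ring_size ce_comp_ring_size; do
    if ! grep "^set ce:${param}=" "$SYSTEM_FILE" >/dev/null; then
        echo "missing: set ce:${param}=... in $SYSTEM_FILE" >&2
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then
    echo "all ce parameters present"
fi
rm -f "$SYSTEM_FILE"
```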
If you are using the ce driver and your cluster interconnect uses a back-to-back connection, do not disable auto-negotiation. If you must disable auto-negotiation, for example to force 1000 Mbit operation, manually specify the link master, or clock master, for the connection.
When manually specifying the link master, you must set one side of the back-to-back connection to provide the clock signal and the other side to use this clock signal. Use the ndd(1M) command to manually specify the link master and follow the guidelines listed below.
Set the link_master or master_cfg_value parameter to 1 (clock master) on one side of the back-to-back connection and to 0 on the other side.
Specify the link_master parameter for ce driver versions up to and including 1.118.
Specify the master_cfg_value parameter for ce driver versions that are released after 1.118.
Set the master_cfg_enable parameter to 1.
To determine the version of the ce driver, use the modinfo command, as shown in the following example.
# modinfo | grep ce
84 78068000 4e016 222 1 ce (CE Ethernet Driver v1.148)
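The choice between link_master and master_cfg_value can also be made in a script by extracting the version number from the modinfo output. The following sketch uses the sample line above; the parsing is an illustration only, so confirm the banner format on your own system:

```shell
# Sample modinfo output line; on a live node, obtain it with:
#   modinfo | grep ce
line='84 78068000 4e016 222 1 ce (CE Ethernet Driver v1.148)'

# Pull the version number out of the parenthesized driver banner.
version=$(printf '%s\n' "$line" | sed 's/.*v\([0-9][0-9.]*\)).*/\1/')

# Versions up to and including 1.118 use link_master; later versions
# use master_cfg_enable and master_cfg_value.
major=${version%%.*}
minor=${version#*.}
if [ "$major" -gt 1 ] || [ "$minor" -gt 118 ]; then
    echo "use master_cfg_enable/master_cfg_value (version $version)"
else
    echo "use link_master (version $version)"
fi
```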
This example shows how to use the ndd command when you want to force 1000 Mbit operation with a back-to-back connection and the version of the ce driver is lower than or equal to 1.118.
# ndd -set /dev/ce link_master 0
This example shows how to use the ndd command when you want to force 1000 Mbit operation with a back-to-back connection and the version of the ce driver is greater than or equal to 1.119.
# ndd -set /dev/ce master_cfg_enable 1
# ndd -set /dev/ce master_cfg_value 0
If you are using jumbo frames, you must edit the ce.conf file to configure them, as explained in the Sun GigaSwift documentation.
The driver documentation instructs you to grep certain entries from the /etc/path_to_inst file to determine your entries for the ce.conf file. If you are using an operating system earlier than the Solaris 10 OS, the OS modifies the entries on Sun Cluster nodes, adding a node-identifier prefix to them. For example, an entry modified for a Sun Cluster node resembles the following:
# grep ce /etc/path_to_inst
"/node@1/pci@8,600000/network@1" 0 "ce"
When editing the ce.conf file, remove the /node@nodeID identifier prefix from the entries that you put into the driver configuration file. For the example above, the entry to put into the configuration file is:
"/pci@8,600000/network@1" 0 "ce" |