Generally, installing interconnect, storage, and Fibre Channel (FC) hardware for a campus cluster does not differ markedly from installing it for standard cluster configurations.
The steps for installing Ethernet-based campus cluster interconnect hardware are the same as the steps for standard clusters. Refer to Installing Ethernet or InfiniBand Cluster Interconnect Hardware. When installing the media converters, consult the accompanying documentation, including requirements for fiber connections.
The guidelines for configuring virtual local area network (VLAN) interconnect networks are the same as the guidelines for standard clusters. See Configuring VLANs as Private Interconnect Networks.
The steps for installing shared storage are the same as the steps for standard clusters. Refer to the documentation for your storage device for those steps.
Campus clusters require FC switches to mediate between multimode and single-mode fibers. The steps for configuring the settings on the FC switches are very similar to the steps for standard clusters.
If your switch supports flexibility in the buffer allocation mechanism (for example, the QLogic switch with donor ports), make certain that you allocate a sufficient number of buffers to the ports that are dedicated to interswitch links (ISLs). If your switch has a fixed number of frame buffers (or buffer credits) per port, you do not have this flexibility.
The following rules determine the number of buffers that you might need:
For 1 Gbps, calculate buffer credits as:
(length-in-km) x (0.6)
Round the result up to the next whole number. For example, a 10 km connection requires 6 buffer credits, and a 7 km connection requires 5 buffer credits.
For 2 Gbps, calculate buffer credits as:
(length-in-km) x (1.2)
Round the result up to the next whole number. For example, a 10 km connection requires 12 buffer credits, while a 7 km connection requires 9 buffer credits.
For greater speeds or for more details, refer to your switch documentation for information about computing buffer credits.
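The following Python sketch applies the two rules above. It is an illustrative calculation only: the function name and its error handling are assumptions for this example, not part of any switch tooling, and it covers only the 1 Gbps and 2 Gbps factors given in this section.

import math

# Buffer credits required per kilometer of ISL, from the rules above:
# 0.6 per km at 1 Gbps and 1.2 per km at 2 Gbps. Other speeds are not
# covered here; consult your switch documentation for those.
CREDITS_PER_KM = {1: 0.6, 2: 1.2}

def buffer_credits(length_km, speed_gbps):
    """Return the buffer credits needed for an ISL of the given length
    and speed, rounded up to the next whole number."""
    if speed_gbps not in CREDITS_PER_KM:
        raise ValueError("Refer to your switch documentation for speeds "
                         "other than 1 Gbps or 2 Gbps.")
    return math.ceil(length_km * CREDITS_PER_KM[speed_gbps])

# The examples from the text:
print(buffer_credits(10, 1))   # 6
print(buffer_credits(7, 1))    # 5
print(buffer_credits(10, 2))   # 12
print(buffer_credits(7, 2))    # 9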