When designing your campus cluster, all the requirements for a noncampus cluster still apply. Plan your cluster to eliminate any single point of failure in nodes, cluster interconnect, data storage, and public network. Just as in a standard cluster, a campus cluster requires redundant connections and switches. Disk multipathing helps ensure that each node can always access each shared storage device. These concerns are universal for Sun Cluster.
After you have a valid cluster plan, follow the remaining requirements in this section to ensure a proper campus cluster. To achieve maximum benefit from your campus cluster, consider the guidelines in SPARC: Guidelines for Designing a Campus Cluster.
Your campus cluster must observe all requirements and limitations of the technologies you choose to use. SPARC: Determining Campus Cluster Interconnect Technologies provides a list of tested technologies and their known limitations.
When planning your cluster interconnect, remember that campus clustering requires redundant physical (not logical) network connections.
A campus cluster must include at least two rooms using two independent SANs to connect to the shared storage. See Figure 7–1 for an illustration of this configuration.
Additional rooms need not be fully connected to the shared storage. However, if you are using Oracle Real Application Clusters (RAC), all nodes that support Oracle RAC must be fully connected to the shared storage devices. See SPARC: Quorum in Clusters With Four Rooms or More for a description of a campus cluster with both direct and indirect storage connections.
Your campus cluster must use SAN-supported storage devices for shared storage. When planning the cluster, ensure that it adheres to the SAN requirements for all storage connections. See the SAN Solutions documentation site for information on SAN requirements.
You must mirror a campus cluster's shared data. If one room of the cluster is lost, another room must be able to provide access to the data. Therefore, data replication between shared disks must always be performed across rooms, rather than within rooms. Both copies of the data must never reside in a single room. Host-based mirroring is required for all campus cluster configurations, because hardware RAID alone cannot provide data redundancy across rooms.
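As a hedged sketch of what cross-room host-based mirroring can look like with Solaris Volume Manager, the commands below build a mirror whose submirrors sit on shared disks in different rooms. The diskset name (campusset) and DID device numbers (d4 in room one, d7 in room two) are hypothetical; substitute the devices from your own configuration.

```shell
# Sketch: host-based mirroring across rooms with Solaris Volume Manager.
# Diskset name and DID device numbers are hypothetical examples.

# Create one submirror on a shared disk in each room.
metainit -s campusset d11 1 1 /dev/did/rdsk/d4s0
metainit -s campusset d12 1 1 /dev/did/rdsk/d7s0

# Create the mirror from the room-one submirror, then attach the
# room-two submirror so every write is replicated across rooms.
metainit -s campusset d10 -m d11
metattach -s campusset d10 d12
```

Because each submirror resides in a different room, the loss of either room leaves a complete copy of the data accessible from the surviving room.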
In addition to mirroring your data, you can add storage-based data replication if you judge that your campus cluster needs the additional data redundancy. See Appendix A, Data Replication Approaches for more information on storage-based data replication.
You must use a quorum device for a two-node cluster. For larger clusters, a quorum device is optional. These are standard cluster requirements.
In addition, you can configure quorum devices to ensure that specific rooms can form a cluster in the event of a failure. For guidelines about where to locate your quorum device, see SPARC: Deciding How to Use Quorum Devices.
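As a hedged illustration, the commands below show the Sun Cluster 3.x scconf syntax for adding a shared disk as a quorum device and verifying the result. The DID device name d20 is a hypothetical example; choose a shared disk located in the room that should be able to form the cluster alone after a failure.

```shell
# Sketch: adding a quorum device (Sun Cluster 3.x scconf syntax).
# The DID device d20 is a hypothetical example.

# Add the shared disk as a quorum device.
scconf -a -q globaldev=d20

# Verify the quorum configuration and current vote counts.
scstat -q
```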
If you use Solstice DiskSuite/Solaris Volume Manager as your volume manager for shared device groups, carefully plan the distribution of your replicas. In two-room configurations, all disksets should be configured with an additional replica in the room that houses the cluster quorum device. Similarly, in a three-room two-node configuration, a single room houses both the quorum device and at least one extra disk configured into each of the disksets, so that each diskset has extra replicas in that third room.
You can use a quorum disk for these replicas.
Refer to your Solstice DiskSuite/Solaris Volume Manager documentation for details on configuring diskset replicas.
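The replica distribution described above can be sketched as follows. Solaris Volume Manager places a state database replica on slice 7 of each disk added to a diskset, so replica placement follows disk placement. The host names, diskset name, and DID device numbers below are hypothetical examples.

```shell
# Sketch: spreading diskset state database replicas across rooms.
# SVM places a replica on slice 7 of each disk added to a diskset,
# so adding a disk from a room places a replica in that room.
# Host names, diskset name, and DID numbers are hypothetical.

# Create the diskset with both cluster nodes as hosts.
metaset -s campusset -a -h phys-node-1 phys-node-2

# Add shared disks from each of the first two rooms ...
metaset -s campusset -a /dev/did/rdsk/d4 /dev/did/rdsk/d7

# ... and at least one disk from the room that houses the quorum
# device, so that room also holds a replica for this diskset.
metaset -s campusset -a /dev/did/rdsk/d12
```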