When designing your campus cluster, all of the requirements for a standard cluster still apply. Plan your cluster to eliminate any single point of failure in nodes, cluster interconnect, data storage, and public network. Just as in the standard cluster, a campus cluster requires redundant connections and switches. Disk multipathing helps ensure that each node can access each shared storage device. These concerns are universal for Oracle Solaris Cluster configurations.
After you have a valid cluster plan, follow the requirements in this section to ensure a correct campus cluster. To achieve maximum benefits from your campus cluster, consider implementing the Guidelines for Designing a Campus Cluster.
To build a specifications-based campus cluster, contact your Oracle representative, who will assist you with the design and implementation of your specific configuration. This process ensures that the configuration that you implement complies with the specification guidelines, is interoperable, and is supportable.
Your campus cluster must observe all requirements and limitations of the technologies that you choose to use. Determining Campus Cluster Connection Technologies provides a list of tested technologies and their known limitations.
When planning your cluster interconnect, remember that campus clustering requires redundant network connections.
A campus cluster must include at least two rooms using two independent SANs to connect to the shared storage. See Basic Three-Room, Two-Node Campus Cluster Configuration With Multipathing for an illustration of this configuration.
If you are using Oracle Real Application Clusters (Oracle RAC), all nodes that support Oracle RAC must be fully connected to the shared storage devices. Also, all rooms of a specifications-based campus cluster must be fully connected to the shared storage devices.
See Quorum in Clusters With Four Rooms or More for a description of a campus cluster with both direct and indirect storage connections.
Your campus cluster must use SAN-supported storage devices for shared storage. When planning the cluster, ensure that it adheres to the SAN requirements for all storage connections. See the SAN Solutions documentation site (http://www.oracle.com/technetwork/indexes/documentation/index.html) for information about SAN requirements.
Oracle Solaris Cluster software supports two methods of data replication within a cluster: host-based mirroring and storage-based replication.
Host-based mirroring can be used to mirror a campus cluster's shared data. Host-based mirroring can be an inexpensive solution because it uses locally attached disks and does not require special storage arrays.
If one room of the cluster is lost, another room must be able to provide access to the data. Therefore, mirroring between shared disks must always be performed across rooms, rather than within rooms. Never locate both copies of the data in a single room.
Solaris Volume Manager and ZFS can be used for host-based mirroring. For more information, see the following documentation:
Solaris Volume Manager – Chapter 4, Configuring Solaris Volume Manager Software in Oracle Solaris Cluster 4.3 Software Installation Guide, Mirroring Solaris Volume Manager Disk Sets Within a Campus Cluster, and Solaris Volume Manager Administration Guide
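As an illustration of cross-room mirroring, the following sketch shows how such mirrors might be created with ZFS and with Solaris Volume Manager. All device names (the pool name, the DID devices, and the metadevice numbers) are hypothetical placeholders; substitute the devices that belong to each room in your configuration.

```shell
# Hypothetical devices: c1t0d0 is in room 1, c2t0d0 is in room 2.
# ZFS: create a mirrored pool whose two halves sit in different rooms.
zpool create campuspool mirror c1t0d0 c2t0d0

# Solaris Volume Manager: build a mirror (d20) from one submirror per room.
metainit d21 1 1 /dev/did/rdsk/d3s0   # submirror on a room 1 disk
metainit d22 1 1 /dev/did/rdsk/d8s0   # submirror on a room 2 disk
metainit d20 -m d21                   # create the mirror from the first submirror
metattach d20 d22                     # attach the second, cross-room submirror
```

With either approach, losing one room leaves a complete copy of the data on the surviving submirror or pool half.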
An alternative to host-based mirroring is storage-based replication, which moves the work of data replication off the cluster nodes and onto the storage device. Storage-based data replication can simplify the infrastructure required, which can be useful in campus cluster configurations.
For more information on both types of data replication and supported software, see Chapter 4, Data Replication Approaches in Oracle Solaris Cluster 4.3 System Administration Guide.
You must use a quorum device for a two-node cluster. For larger clusters, a quorum device is optional. These are standard cluster requirements.
In addition, you can configure quorum devices to ensure that specific rooms can form a cluster in the event of a failure. For guidelines about where to locate your quorum device, see Deciding How to Use Quorum Devices.
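For example, in a three-room configuration you might place the quorum device in the third room so that either node room can still achieve quorum with it after the other node room fails. A sketch of this configuration, assuming a shared DID device named d4 (a hypothetical name) located in the third room:

```shell
# Hypothetical: d4 is a shared DID device located in the third room.
# Add it as a quorum device so a surviving room can still achieve quorum.
clquorum add d4

# Verify the quorum configuration and current vote counts.
clquorum list -v
clquorum status
```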
If you use Solaris Volume Manager as your volume manager for shared device groups, carefully plan the distribution of your replicas. In two-room configurations, all disk sets should be configured with an additional replica in the room that houses the cluster quorum device.
For example, in three-room, two-node configurations, a single room houses both the quorum device and at least one extra disk that is configured in each of the disk sets. Each disk set should have extra replicas in the third room.
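Because Solaris Volume Manager places state database replicas on the disks in a disk set, you obtain the extra replica by adding a disk from the quorum-device room to each disk set. A minimal sketch, assuming a hypothetical disk set name (campus-ds) and a hypothetical DID device (d15) in the third room:

```shell
# Hypothetical names: campus-ds is the disk set; /dev/did/rdsk/d15 is a
# disk in the third room, which also houses the quorum device.
# Adding the disk causes Solaris Volume Manager to place a state database
# replica on it, giving the disk set an extra replica in that room.
metaset -s campus-ds -a /dev/did/rdsk/d15

# Check replica placement for the disk set.
metadb -s campus-ds
```

Repeat this step for each disk set so that every disk set keeps a replica in the room that houses the quorum device.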
Refer to Solaris Volume Manager Administration Guide for details about configuring mirroring of a disk set's replicas.