Sun Cluster Concepts Guide for Solaris OS

SPARC: Sun Cluster Topologies

A topology is the scheme that connects the Solaris hosts in the cluster to the storage platforms that are used in a Sun Cluster environment. Sun Cluster software supports any topology that adheres to the following guidelines.

You can configure logical domains (LDoms) guest domains and LDoms I/O domains as virtual Solaris hosts. In other words, you can create a clustered pair, pair+N, N+1, or N*N cluster that consists of any combination of physical machines, LDoms I/O domains, and LDoms guest domains. You can also create clusters that consist of only LDoms guest domains, only LDoms I/O domains, or any combination of the two.

Sun Cluster software does not require you to configure a cluster by using specific topologies. The following topologies are described to provide the vocabulary to discuss a cluster's connection scheme. These topologies are typical connection schemes.

The following sections include sample diagrams of each topology.

SPARC: Clustered Pair Topology

A clustered pair topology is two or more pairs of Solaris hosts that operate under a single cluster administrative framework. In this configuration, failover occurs only between a pair. However, all hosts are connected by the cluster interconnect and operate under Sun Cluster software control. You might use this topology to run a parallel database application on one pair and a failover or scalable application on another pair.

Using the cluster file system, you could also have a two-pair configuration. More than two hosts can run a scalable service or parallel database, even though not all the hosts are directly connected to the disks that store the application data.

The following figure illustrates a clustered pair configuration.

Figure 2–2 SPARC: Clustered Pair Topology

Illustration: The preceding context describes the graphic.

SPARC: Pair+N Topology

The pair+N topology includes a pair of Solaris hosts that are directly connected to the following:

Shared storage

An additional set of hosts that use the cluster interconnect to access shared storage (these hosts have no direct connection to the shared storage themselves)

The following figure illustrates a pair+N topology where two of the four hosts (Host 3 and Host 4) use the cluster interconnect to access the storage. This configuration can be expanded to include additional hosts that do not have direct access to the shared storage.

Figure 2–3 Pair+N Topology

Illustration: The preceding context describes the graphic.

SPARC: N+1 (Star) Topology

An N+1 topology includes some number of primary Solaris hosts and one secondary host. You do not have to configure the primary hosts and secondary host identically. The primary hosts actively provide application services. The secondary host need not be idle while waiting for a primary host to fail.

The secondary host is the only host in the configuration that is physically connected to all the multihost storage.

If a failure occurs on a primary host, Sun Cluster fails over the resources to the secondary host. The resources function on the secondary host until they are switched back (either automatically or manually) to the primary host.

The secondary host must always have enough excess CPU capacity to handle the load if one of the primary hosts fails.

The following figure illustrates an N+1 configuration.

Figure 2–4 SPARC: N+1 Topology

Illustration: The preceding context describes the graphic.

SPARC: N*N (Scalable) Topology

An N*N topology enables every shared storage device in the cluster to connect to every Solaris host in the cluster. This topology enables highly available applications to fail over from one host to another without service degradation. When failover occurs, the new host can access the storage device by using a local path instead of the private interconnect.

The following figure illustrates an N*N configuration.

Figure 2–5 SPARC: N*N Topology

Illustration: The preceding context describes the graphic.

SPARC: LDoms Guest Domains: Cluster in a Box Topology

In this logical domains (LDoms) guest domain topology, a cluster and every node within that cluster are located on the same Solaris host. Each LDoms guest domain node acts the same as a Solaris host in a cluster. So that you do not have to configure a quorum device, this configuration includes three nodes rather than only two.

In this topology, you do not need to connect each virtual switch (vsw) for the private network to a physical network, because the nodes need only communicate with each other. The cluster nodes can also share the same storage device, because all the nodes are located on the same host. To learn more about guidelines for using and installing LDoms guest domains or LDoms I/O domains in a cluster, see How to Install Sun Logical Domains Software and Create Domains in Sun Cluster Software Installation Guide for Solaris OS.
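The private-network switches for a cluster in a box can therefore be created without any backing physical device. The following is a minimal sketch, assuming the Sun Logical Domains `ldm` command run in the control domain; the switch, interface, and domain names (priv-vsw1, priv-vnet1, node1, and so on) are illustrative, not required names.

```shell
# Create two private-network virtual switches in the control domain.
# Omitting net-dev= leaves each switch unconnected to any physical NIC,
# so private interconnect traffic never leaves the host.
ldm add-vsw priv-vsw1 primary
ldm add-vsw priv-vsw2 primary

# Give each guest-domain node a virtual network device on each private
# switch (the three domain names are illustrative).
for node in node1 node2 node3; do
  ldm add-vnet priv-vnet1 priv-vsw1 $node
  ldm add-vnet priv-vnet2 priv-vsw2 $node
done
```

Because the switches have no net-dev backing device, the guest domains can exchange private interconnect traffic only with one another, which is all that this topology requires.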

This topology does not provide high availability, as all nodes in the cluster are located on the same host. However, developers and administrators might find this topology useful for testing and other non-production tasks. This topology is also called a “cluster in a box”.

The following figure illustrates a cluster in a box configuration.

Figure 2–6 SPARC: Cluster in a Box Topology

Illustration: The preceding context describes the graphic.

SPARC: LDoms Guest Domains: Single Cluster Spans Two Different Hosts Topology

In this logical domains (LDoms) guest domain topology, a single cluster spans two different Solaris hosts, and the cluster comprises one node on each host. Each LDoms guest domain node acts the same as a Solaris host in a cluster. To learn more about guidelines for using and installing LDoms guest domains or LDoms I/O domains in a cluster, see How to Install Sun Logical Domains Software and Create Domains in Sun Cluster Software Installation Guide for Solaris OS.

The following figure illustrates a configuration in which a single cluster spans two different hosts.

Figure 2–7 SPARC: Single Cluster Spans Two Different Hosts

Illustration: The preceding context describes the graphic.

SPARC: LDoms Guest Domains: Clusters Span Two Different Hosts Topology

In this logical domains (LDoms) guest domain topology, each cluster spans two different Solaris hosts and comprises one node on each host. Each LDoms guest domain node acts the same as a Solaris host in a cluster. Because both clusters share the same interconnect switch in this configuration, you must specify a different private network address for each cluster; if clusters that share an interconnect switch use the same private network address, the configuration fails.
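One way to satisfy this requirement is to change the private network address on one of the clusters. The following is a hedged sketch that assumes the `cluster set-netprops` subcommand and the `private_netaddr` property of the Sun Cluster `cluster(1CL)` command; the address shown is illustrative, and you can also choose the private network address when you run `scinstall`.

```shell
# Leave the first cluster on the default private network (172.16.0.0).
# On the second cluster, while it is not yet in cluster mode, assign a
# non-overlapping private network address (illustrative value):
cluster set-netprops -p private_netaddr=172.16.16.0
```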

To learn more about guidelines for using and installing LDoms guest domains or LDoms I/O domains in a cluster, see How to Install Sun Logical Domains Software and Create Domains in Sun Cluster Software Installation Guide for Solaris OS.

The following figure illustrates a configuration in which multiple clusters span two different hosts.

Figure 2–8 SPARC: Clusters Span Two Different Hosts

Illustration: The preceding context describes the graphic.

SPARC: LDoms Guest Domains: Redundant I/O Domains

In this logical domains (LDoms) guest domain topology, multiple I/O domains ensure that guest domains, or nodes within the cluster, continue to operate if an I/O domain fails. Each LDoms guest domain node acts the same as a Solaris host in a cluster.

In this topology, the guest domain runs IP network multipathing (IPMP) across two public networks, one through each I/O domain. Guest domains also mirror storage devices across different I/O domains. To learn more about guidelines for using and installing LDoms guest domains or LDoms I/O domains in a cluster, see How to Install Sun Logical Domains Software and Create Domains in Sun Cluster Software Installation Guide for Solaris OS.
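To illustrate the public-network half of this design, the following sketch shows Solaris 10 style link-based IPMP configuration files inside a guest domain, assuming one virtual network interface plumbed through each I/O domain. The interface names, group name, and address are illustrative, not values the topology requires.

```shell
# /etc/hostname.vnet0 -- data address on the interface that is backed
# by the first I/O domain:
echo '192.168.10.21 netmask + broadcast + group sc_ipmp0 up' > /etc/hostname.vnet0

# /etc/hostname.vnet1 -- standby path through the second I/O domain,
# placed in the same IPMP group:
echo 'group sc_ipmp0 up' > /etc/hostname.vnet1
```

If either I/O domain fails, IPMP moves the data address to the interface that is backed by the surviving I/O domain, while the mirrored storage keeps the node's data accessible through that same domain.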

The following figure illustrates a configuration in which redundant I/O domains ensure that nodes within the cluster continue to operate if an I/O domain fails.

Figure 2–9 SPARC: Redundant I/O Domains

Illustration: The preceding context describes the graphic.