
Concepts for Oracle® Solaris Cluster 4.4


SPARC: Oracle Solaris Cluster Topologies

A topology is the connection scheme that connects the cluster nodes to the storage platforms that are used in an Oracle Solaris Cluster environment. Oracle Solaris Cluster software supports any topology that adheres to the following guidelines.

  • An Oracle Solaris Cluster environment that is composed of SPARC-based systems supports from one to sixteen cluster nodes in a cluster. Different hardware configurations impose additional limits on the maximum number of nodes that you can configure in a cluster composed of SPARC-based systems.

  • A shared storage device can connect to as many nodes as the storage device supports.

  • Shared storage devices do not need to connect to all nodes of the cluster. However, these storage devices must connect to at least two nodes.

You can configure Oracle VM Server for SPARC software guest domains and service domains as cluster nodes. In other words, you can create a clustered pair, pair+N, N+1, or N*N cluster that consists of any combination of physical machines, I/O domains, and guest domains. You can also create clusters that consist of only guest domains and I/O domains.

Oracle Solaris Cluster software does not require you to configure a cluster by using specific topologies. The following topologies are described to provide the vocabulary to discuss a cluster's connection scheme. These topologies are typical connection schemes.

  • Clustered pair

  • Pair+N

  • N+1 (star)

  • N*N (scalable)

  • Oracle VM Server for SPARC Software guest domains: cluster in a box

  • Oracle VM Server for SPARC Software guest domains: single cluster spans two different physical cluster hosts (boxes)

  • Oracle VM Server for SPARC Software guest domains: clusters span two different hosts (boxes)

  • Oracle VM Server for SPARC Software guest domains: each guest domain is hosted by redundant service domains

The following sections include sample diagrams of each topology.

SPARC: Clustered Pair Topology

A clustered pair topology is two or more pairs of cluster nodes that operate under a single cluster administrative framework. In this configuration, failover occurs only between a pair. However, all nodes are connected by the cluster interconnect and operate under Oracle Solaris Cluster software control. You might use this topology to run a parallel database application on one pair and a failover or scalable application on another pair.

Using the cluster file system, you could also have a two-pair configuration. More than two nodes can run a scalable service or parallel database, even though not all of the nodes are directly connected to the disks that store the application data.

The following figure illustrates a clustered pair configuration.

Figure 5  SPARC: Clustered Pair Topology

image:Graphic shows a clustered pair configuration with four nodes.

SPARC: Pair+N Topology

The pair+N topology includes a pair of cluster nodes that are directly connected to shared storage, plus an additional set of nodes that use the cluster interconnect to access shared storage (they have no direct connection themselves).

The following figure illustrates a pair+N topology where two of the four nodes (Host 3 and Host 4) use the cluster interconnect to access the storage. This configuration can be expanded to include additional nodes that do not have direct access to the shared storage.

Figure 6  Pair+N Topology

image:Graphic shows a pair+N topology where two of the four nodes use the cluster interconnect to access the storage.

SPARC: N+1 (Star) Topology

An N+1 topology includes some number of primary cluster nodes and one secondary node. You do not have to configure the primary nodes and secondary node identically. The primary nodes actively provide application services. The secondary node need not be idle while waiting for a primary node to fail.

The secondary node is the only node in the configuration that is physically connected to all the multihost storage.

If a failure occurs on a primary node, Oracle Solaris Cluster fails over the resources to the secondary node. The secondary node is where the resources function until they are switched back (either automatically or manually) to the primary node.

The secondary node must always have enough excess CPU capacity to handle the load if one of the primary nodes fails.

The following figure illustrates an N+1 configuration.

Figure 7  SPARC: N+1 Topology

image:Graphic shows an N+1 configuration.

SPARC: N*N (Scalable) Topology

An N*N topology enables every shared storage device in the cluster to connect to every cluster node in the cluster. This topology enables highly available applications to fail over from one node to another without service degradation. When failover occurs, the new node can access the storage device by using a local path instead of the private interconnect.

The following figure illustrates an N*N configuration.

Figure 8  SPARC: N*N Topology

image:Graphic shows an N*N topology.

SPARC: Oracle VM Server for SPARC Software Guest Domains: Cluster in a Box Topology

In this Oracle VM Server for SPARC software guest domain topology, a cluster and every node within that cluster are located on the same cluster host. Each guest domain acts the same as a node in a cluster. The configuration shown in Figure 9 includes three nodes rather than only two.

In this topology, you do not need to connect each virtual switch (vsw) for the private network to a physical network because they need only communicate with each other. In this topology, cluster nodes can also share the same storage device, as all cluster nodes are located on the same host or box. To learn more about guidelines for using and installing guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.


Caution  -  The common host/box in this topology represents a single point of failure.


All nodes in the cluster are located on the same host/box. Developers and administrators might find this topology useful for testing and other non-production tasks. This topology is also called a "cluster in a box". Multiple clusters can share the same physical host/box.
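
The following commands are a minimal sketch of how the private-network virtual switches described above might be created without a backing physical network device. The switch names (private-vsw1, private-vsw2), virtual network device names, and guest domain names (node1-dom, node2-dom) are placeholder examples; see the ldm(8) man page and the installation procedure referenced above for the authoritative steps and options.

  # Create two private-interconnect virtual switches in the control domain.
  # Omitting net-dev leaves them unconnected to any physical network, and
  # mode=sc marks them for Oracle Solaris Cluster interconnect traffic.
  primary# ldm add-vsw mode=sc private-vsw1 primary
  primary# ldm add-vsw mode=sc private-vsw2 primary

  # Add a virtual network device on each private switch to every guest domain
  # that will be a cluster node.
  primary# ldm add-vnet vnet1 private-vsw1 node1-dom
  primary# ldm add-vnet vnet2 private-vsw2 node1-dom
  primary# ldm add-vnet vnet1 private-vsw1 node2-dom
  primary# ldm add-vnet vnet2 private-vsw2 node2-dom

Because no net-dev is specified, the heartbeat traffic between the guest domain nodes never leaves the host.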

The following figure illustrates a cluster in a box configuration.

Figure 9  SPARC: Cluster in a Box Topology

image:Graphic shows a cluster in a box configuration. This configuration does not provide high availability.

SPARC: Oracle VM Server for SPARC Software Guest Domains: Clusters Span Two Different Hosts Topology

In this Oracle VM Server for SPARC software guest domain topology, each cluster spans two different hosts, and each host contains nodes from more than one cluster. Each guest domain acts the same as a node in a cluster. In this configuration, because both clusters share the same interconnect switch, you must specify a different private network address on each cluster. If you specify the same private network address on clusters that share an interconnect switch, the configuration fails.
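
As a hedged illustration of that requirement, the private network address that each cluster uses is normally chosen when the cluster is established and can be displayed, and if necessary changed, with the cluster command. The node names and the addresses below (172.16.0.0 and 172.17.0.0) are arbitrary examples; cluster set-netprops must be run according to the procedure in the administration documentation (typically with the nodes in noncluster mode).

  # On a node of each cluster, display the private network settings in use.
  cluster1-node1# cluster show-netprops
  cluster2-node1# cluster show-netprops

  # Give the two clusters non-overlapping private network addresses.
  cluster1-node1# cluster set-netprops -p private_netaddr=172.16.0.0
  cluster2-node1# cluster set-netprops -p private_netaddr=172.17.0.0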

To learn more about guidelines for using and installing Oracle VM Server for SPARC software guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.

The following figure illustrates a configuration in which more than a single cluster spans two different hosts.

Figure 10  SPARC: Clusters Span Two Different Hosts

image:Graphic shows a configuration where more than one cluster spans two different hosts.

SPARC: Oracle VM Server for SPARC Software Guest Domains: Redundant Service Domains

In this Oracle VM Server for SPARC software guest domain topology, multiple service domains ensure that guest domains, which are configured as cluster nodes, continue to operate if a service domain fails. Each guest domain acts the same as a node in a cluster.

To learn more about guidelines for using and installing guest domains or service domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.

The following figure illustrates a configuration in which redundant service domains ensure that nodes within the cluster continue to operate if a service domain fails.

Figure 11  Redundant Service Domains

image:Graphic shows how redundant service domains ensure that nodes within the cluster continue to operate if a service domain fails.
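
As a minimal, hedged sketch of the storage side of this topology, each guest domain node can be given one virtual disk from each service domain and then mirror the two disks itself. The second service domain name (alternate), the back-end device paths, and the volume, disk, and domain names below are placeholders; the documentation referenced above describes the supported procedure.

  # From the control domain, create a virtual disk service in each service
  # domain and export a back-end device through each of them (each service
  # domain uses its own path to the shared storage).
  primary# ldm add-vds primary-vds0 primary
  primary# ldm add-vds alternate-vds0 alternate
  primary# ldm add-vdsdev /dev/dsk/c2t1d0s2 data0@primary-vds0
  primary# ldm add-vdsdev /dev/dsk/c3t1d0s2 data0@alternate-vds0

  # Give the guest domain node one virtual disk from each service domain. The
  # guest then mirrors the two disks (for example, with ZFS) so that its
  # storage access survives the failure of either service domain.
  primary# ldm add-vdisk data0-a data0@primary-vds0 node1-dom
  primary# ldm add-vdisk data0-b data0@alternate-vds0 node1-dom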

Only one guest domain cluster can be supported per VSAN/vHBA combination. To add more guest domain clusters, you can create additional VSANs and vHBAs by using different HBA ports. In this configuration, each VSAN should have only one vHBA because there is no access control per vHBA.
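
The following commands are a rough, hedged sketch of how one VSAN/vHBA combination per cluster might be created from separate HBA ports. The physical port paths, VSAN, vHBA, and domain names are placeholders, and the exact ldm add-vsan and ldm add-vhba syntax should be verified against the ldm(8) man page for your Oracle VM Server for SPARC release.

  # Create one virtual SAN per cluster, each on its own FC port in the
  # service domain that owns the ports (port paths are placeholders).
  primary# ldm add-vsan /SYS/MB/PCIE2/HBA0/PORT0 cluster1-vsan primary
  primary# ldm add-vsan /SYS/MB/PCIE2/HBA0/PORT1 cluster2-vsan primary

  # Give each guest domain cluster node on this host one vHBA on its own
  # cluster's VSAN; keep a single vHBA per VSAN because there is no per-vHBA
  # access control in this configuration.
  primary# ldm add-vhba cluster1-vhba cluster1-vsan cluster1-node1-dom
  primary# ldm add-vhba cluster2-vhba cluster2-vsan cluster2-node1-dom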

VSAN on N_Port ID Virtualization (NPIV) enables support of VSANs and vHBAs for different guest domain cluster nodes on a shared HBA port, because NPIV enables configuration of access control per vHBA. The following figure shows only the storage configuration; the networking requirements are unchanged. This figure is an example of using software mirroring for redundancy. The FC storage data path labeled "NPIV 1" is for Cluster 1, and the path labeled "NPIV 2" is for Cluster 2. Use SAN zoning and/or host/LUN mapping to isolate LUNs as needed for the desired configuration, for example, to separate the LUNs for "NPIV 1" from the LUNs for "NPIV 2," as well as the LUNs within each cluster.

image:Example showing NPIV- VHBA

The following figure is a similar example, except that it uses MPxIO for redundancy instead of software mirroring.

image:Example showing NPIV- VHBA MPxIO

Note -  Besides cluster nodes that use virtual devices for I/O from service domains, a topology in which cluster nodes are I/O domains with SR-IOV devices is also supported, including redundant root domains that supply redundant virtual functions to the I/O domain. For additional information, see SPARC: Guidelines for Oracle VM Server for SPARC Logical Domains as Cluster Nodes in Installing and Configuring an Oracle Solaris Cluster 4.4 Environment.