
SPARC: Oracle Solaris Cluster Topologies

A topology is the connection scheme that links the Oracle Solaris nodes in the cluster to the storage platforms that are used in an Oracle Solaris Cluster environment. Oracle Solaris Cluster software supports any topology that adheres to the following guidelines:

You can configure Oracle VM Server for SPARC software guest domains and I/O domains as cluster nodes. In other words, you can create a clustered pair, pair+N, N+1, and N*N cluster that consists of any combination of physical machines, I/O domains, and guest domains. You can also create clusters that consist of only guest domains and I/O domains.

Oracle Solaris Cluster software does not require you to configure a cluster by using specific topologies. The following topologies are described to provide the vocabulary to discuss a cluster's connection scheme: clustered pair, pair+N, N+1 (star), N*N (scalable), and the Oracle VM Server for SPARC software guest domain topologies (cluster in a box, clusters that span two different hosts, and redundant I/O domains).

The following sections include sample diagrams of each topology.

SPARC: Clustered Pair Topology

A clustered pair topology is two or more pairs of Oracle Solaris nodes that operate under a single cluster administrative framework. In this configuration, failover occurs only between the two nodes of a pair. However, all nodes are connected by the cluster interconnect and operate under Oracle Solaris Cluster software control. You might use this topology to run a parallel database application on one pair and a failover or scalable application on another pair.

Using the cluster file system, you could also have a two-pair configuration in which more than two nodes run a scalable service or parallel database, even though not all of the nodes are directly connected to the disks that store the application data.
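As a rough sketch of how an application can be confined to one pair in this topology, a failover resource group can name only the two nodes of that pair in its node list. The node and resource-group names below are hypothetical:

  phys-schost# clresourcegroup create -n phys-schost-1,phys-schost-2 db-rg
  phys-schost# clresourcegroup create -n phys-schost-3,phys-schost-4 app-rg

Each group then fails over only between the nodes of its own pair, while both pairs remain under the same cluster administrative framework.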

The following figure illustrates a clustered pair configuration.

Figure 2-5 SPARC: Clustered Pair Topology

The graphic shows a clustered pair configuration with four nodes.

SPARC: Pair+N Topology

The pair+N topology includes a pair of cluster nodes that are directly connected to shared storage, plus an additional set of nodes that use the cluster interconnect to access that storage (they have no direct connection to it themselves).

The following figure illustrates a pair+N topology where two of the four nodes (Host 3 and Host 4) use the cluster interconnect to access the storage. This configuration can be expanded to include additional nodes that do not have direct access to the shared storage.

Figure 2-6 Pair+N Topology


SPARC: N+1 (Star) Topology

An N+1 topology includes some number of primary cluster nodes and one secondary node. You do not have to configure the primary nodes and secondary node identically. The primary nodes actively provide application services. The secondary node need not be idle while waiting for a primary node to fail.

The secondary node is the only node in the configuration that is physically connected to all the multihost storage.

If a failure occurs on a primary node, Oracle Solaris Cluster fails over the resources to the secondary node. The resources run on the secondary node until they are switched back (either automatically or manually) to the primary node.

The secondary node must always have enough excess CPU capacity to handle the load if one of the primary nodes fails.
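As an illustrative sketch of this arrangement (node and resource-group names are hypothetical), each primary node's resource group can list the secondary node last in its node list, so that any failover lands on the secondary node:

  phys-schost# clresourcegroup create -n phys-schost-1,phys-schost-4 sales-rg
  phys-schost# clresourcegroup create -n phys-schost-2,phys-schost-4 web-rg
  phys-schost# clresourcegroup create -n phys-schost-3,phys-schost-4 mail-rg

In this sketch, phys-schost-4 acts as the secondary node for all three resource groups.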

The following figure illustrates an N+1 configuration.

Figure 2-7 SPARC: N+1 Topology


SPARC: N*N (Scalable) Topology

An N*N topology enables every shared storage device in the cluster to connect to every cluster node in the cluster. This topology enables highly available applications to fail over from one node to another without service degradation. When failover occurs, the new node can access the storage device by using a local path instead of the private interconnect.
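On an existing cluster, one way to confirm this kind of connectivity is to list the device paths for each DID device; in an N*N configuration every shared device reports a path from every node. The output below is illustrative only:

  phys-schost# cldevice list -v
  DID Device          Full Device Path
  ----------          ----------------
  d4                  phys-schost-1:/dev/rdsk/c2t1d0
  d4                  phys-schost-2:/dev/rdsk/c2t1d0
  d4                  phys-schost-3:/dev/rdsk/c2t1d0
  d4                  phys-schost-4:/dev/rdsk/c2t1d0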

The following figure illustrates an N*N configuration.

Figure 2-8 SPARC: N*N Topology


SPARC: Oracle VM Server for SPARC Software Guest Domains: Cluster in a Box Topology

In this Oracle VM Server for SPARC guest domain topology, a cluster and every node within that cluster are located on the same Oracle Solaris host. Each guest domain acts the same as a node in a cluster. This configuration includes three nodes rather than only two.

In this topology, you do not need to connect each virtual switch (VSW) for the private network to a physical network because these switches need to communicate only with each other. In this topology, cluster nodes can also share the same storage device because all cluster nodes are located on the same host or box. To learn more about guidelines for using and installing guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Oracle Solaris Cluster Software Installation Guide.



Caution - The common host or box in this topology represents a single point of failure.


All nodes in the cluster are located on the same host or box. Developers and administrators might find this topology useful for testing and other non-production tasks. This topology is also called a "cluster in a box." Multiple clusters can share the same physical host or box.
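As an illustrative sketch of the private-network point above, the private-network virtual switches can be created in the control domain without a net-dev backing device because they never need to reach a physical network, while the public-network switch is backed by a physical NIC. The switch, domain, and device names here are hypothetical:

  primary# ldm add-vsw private-vsw1 primary
  primary# ldm add-vsw private-vsw2 primary
  primary# ldm add-vsw net-dev=nxge0 public-vsw0 primary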

The following figure illustrates a cluster in a box configuration.

Figure 2-9 SPARC: Cluster in a Box Topology

The graphic shows a cluster in a box configuration. This configuration does not provide high availability.

SPARC: Oracle VM Server for SPARC Software Guest Domains: Clusters Span Two Different Hosts Topology

In this Oracle VM Server for SPARC software guest domain topology, each cluster spans two different hosts, and each host contains one node of each cluster. Each guest domain acts the same as a node in a cluster. In this configuration, because both clusters share the same interconnect switch, you must specify a different private network address on each cluster. If you specify the same private network address on clusters that share an interconnect switch, the configuration fails.

To learn more about guidelines for using and installing Oracle VM Server for SPARC software guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Oracle Solaris Cluster Software Installation Guide.
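Each cluster's private network address is normally chosen when the cluster is installed. As a rough sketch, you can display the current setting with the cluster command, and change it if two clusters would collide; the address shown is an example only, and the change is typically made with the nodes booted in noncluster mode:

  phys-schost# cluster show-netprops
  phys-schost# cluster set-netprops -p private_netaddr=172.16.4.0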

The following figure illustrates a configuration in which more than a single cluster spans two different hosts.

Figure 2-10 SPARC: Clusters Span Two Different Hosts


SPARC: Oracle VM Server for SPARC Software Guest Domains: Redundant I/O Domains

In this Oracle VM Server for SPARC software guest domain topology, multiple I/O domains ensure that guest domains, which are configured as cluster nodes, continue to operate if an I/O domain fails. Each guest domain acts the same as a node in a cluster.

To learn more about guidelines for using and installing guest domains or I/O domains in a cluster, see How to Install Oracle VM Server for SPARC Software and Create Domains in Oracle Solaris Cluster Software Installation Guide.
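As an illustrative sketch of what this redundancy can look like, a guest domain that is a cluster node can be given one virtual network device and one virtual disk from each I/O domain, and the guest then combines them with IP multipathing and storage multipathing. The domain, switch, volume, and device names are hypothetical:

  primary# ldm add-vnet pubnet1 primary-vsw0 guest1
  primary# ldm add-vnet pubnet2 alternate-vsw0 guest1
  primary# ldm add-vdisk datadisk1 datavol1@primary-vds guest1
  primary# ldm add-vdisk datadisk2 datavol2@alternate-vds guest1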

The following figure illustrates a configuration in which redundant I/O domains ensure that nodes within the cluster continue to operate if an I/O domain fails.

Figure 2-11 SPARC: Redundant I/O Domains
