Oracle Solaris Cluster Concepts Guide     Oracle Solaris Cluster 4.1


Oracle Solaris Cluster System Hardware and Software Components

This information is directed primarily to hardware service providers. These concepts can help service providers understand the relationships between the hardware components before they install, configure, or service cluster hardware. Cluster system administrators might also find this information useful as background before they install, configure, and administer cluster software.

A cluster is composed of several hardware components, including the following:

    Cluster nodes with local disks (unshared)
    Multihost storage (disks or LUNs shared between cluster nodes)
    Removable media (tapes and CD-ROMs)
    Cluster interconnect
    Public network interfaces
    Client systems
    Administrative console
    Console access devices

Figure 2-1 illustrates how the hardware components work with each other.

Figure 2-1 Oracle Solaris Cluster Hardware Components

The figure shows a two-node cluster with public and private networks, interconnect hardware, local and multihost disks, console, and clients.

The administrative console and console access devices are used to reach the cluster nodes or the terminal concentrator as needed. The Oracle Solaris Cluster software enables you to combine the hardware components into a variety of configurations. The following sections describe these hardware components and configurations.

Cluster Nodes

An Oracle Solaris host (or simply cluster node) is one of the following hardware or software configurations that runs the Oracle Solaris OS and its own processes:

    A physical machine that is not configured with a virtual machine or as a hardware domain
    An Oracle VM Server for SPARC guest domain
    An Oracle VM Server for SPARC I/O domain
    A hardware domain

Depending on your platform, Oracle Solaris Cluster software supports the following configurations:

    SPARC: Oracle Solaris Cluster software supports from one to sixteen cluster nodes in a cluster.
    x86: Oracle Solaris Cluster software supports from one to eight cluster nodes in a cluster.

Cluster nodes are generally attached to one or more multihost storage devices. Nodes that are not attached to multihost devices can use a cluster file system to access the data on multihost devices. For example, one scalable services configuration enables nodes to service requests without being directly attached to multihost devices.

In addition, nodes in parallel database configurations share concurrent access to all the disks.

Public network adapters attach nodes to the public networks, providing client access to the cluster.

Cluster members communicate with the other nodes in the cluster through one or more physically independent networks. This set of physically independent networks is referred to as the cluster interconnect.

Every node in the cluster is aware when another node joins or leaves the cluster. Additionally, every node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other cluster nodes.

Nodes in the same cluster should have the same OS and architecture, as well as similar processing, memory, and I/O capability, so that failover can occur without significant degradation in performance. Because of the possibility of failover, every node must have enough excess capacity to support the workload of all nodes for which it is a backup or secondary.

Software Components for Cluster Hardware Members

To function as a cluster member, a cluster node must have the following software installed:

    Oracle Solaris OS
    Oracle Solaris Cluster software
    A data service application
    Volume management software (Solaris Volume Manager)
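
You can verify the installed cluster software through the Image Packaging System. The following is only a quick illustrative check; the group package name ha-cluster-full is the one commonly used for a full Oracle Solaris Cluster 4.1 installation, so substitute whichever package you actually installed.

    # pkg info ha-cluster-full
    # pkg list 'ha-cluster/*'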

Additional information about installing and configuring these software components is available in the Oracle Solaris Cluster Software Installation Guide.

The following figure provides a high-level view of the software components that work together to create the Oracle Solaris Cluster environment.

Figure 2-2 High-Level Relationship of Oracle Solaris Cluster Components

The figure shows the software components in an Oracle Solaris Cluster environment.

Figure 2-3 provides a more detailed view of the Oracle Solaris Cluster software architecture and the components that make it up.

Figure 2-3 Oracle Solaris Cluster Software Architecture

The figure shows Oracle Solaris Cluster software components, such as the RGM, CMM, CCR, volume managers, and the PxFS cluster file system.

Multihost Devices

Multihost devices are LUNs that can be connected to more than one cluster node at a time. Clusters with more than two nodes do not require quorum devices. A quorum device is a shared storage device or quorum server that is shared by two or more nodes and that contributes votes that are used to establish a quorum. The cluster can operate only when a quorum of votes is available. For more information about quorum and quorum devices, see Quorum and Quorum Devices.
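
If a quorum device is configured, you can review the quorum configuration and the current vote counts from any cluster node with the standard cluster administration commands, for example:

    # clquorum show
    # clquorum status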

Multihost devices have the following characteristics:

    Ability to store application data, application binaries, and configuration files
    Protection against node failures: if clients request the data through one node and that node fails, the requests are switched over to another node that has a direct connection to the same disks

A volume manager can provide software RAID protection for the data residing on the multihost devices.

Combining multihost devices with disk mirroring protects against individual disk failure.

Local Disks

Local disks are disks that are connected to only a single cluster node. Local disks are therefore not protected against node failure (they are not highly available). However, all disks, including local disks, are included in the global namespace and are configured as global devices. Therefore, the disks themselves are visible from all cluster nodes.

See the section Global Devices for more information about global devices.
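
Because all disks are registered as global devices, you can list them, and the paths under the /dev/global namespace, from any cluster node. The cldevice command shown here is part of the standard Oracle Solaris Cluster command set; the device IDs in the output depend entirely on your configuration.

    # cldevice list -v
    # ls /dev/global/dsk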

Removable Media

Removable media such as tape drives and CD-ROM drives are supported in a cluster. In general, you install, configure, and service these devices in the same way as in a nonclustered environment. Refer to Oracle Solaris Cluster 4.1 Hardware Administration Manual for information about installing and configuring removable media.

See the section Global Devices for more information about global devices.

Cluster Interconnect

The cluster interconnect is the physical configuration of devices that is used to transfer cluster-private communications and data service communications between cluster nodes in the cluster.

Only nodes in the cluster can be connected to the cluster interconnect. The Oracle Solaris Cluster security model assumes that only cluster nodes have physical access to the cluster interconnect.

You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and lower availability. Moreover, if that single interconnect fails, the cluster is at greater risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure.

The cluster interconnect consists of three hardware components: adapters, junctions, and cables. The following list describes each of these hardware components.

    Adapters: The network interface cards that are located in each cluster node and that connect the node to the private interconnect.
    Junctions: The switches that are located outside of the cluster nodes. In a two-node cluster, junctions are not required because the nodes can be connected directly with back-to-back cables.
    Cables: The physical connections that you install either between two network adapters or between an adapter and a junction.
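
After the interconnect hardware is cabled and configured, you can verify that every interconnect path is online from any cluster node. The clinterconnect command is part of the standard Oracle Solaris Cluster command set; its output depends on the adapters and switches in your configuration.

    # clinterconnect status
    # clinterconnect show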

Figure 2-4 shows how two nodes are connected by transport adapters, cables, and a transport switch.

Figure 2-4 Cluster Interconnect

The figure shows two nodes connected by transport adapters, cables, and a transport switch.

Public Network Interfaces

Clients connect to the cluster through the public network interfaces.

You can set up cluster nodes to include multiple public network interface cards that perform the following functions:

    Allow a cluster node to be connected to multiple subnets
    Provide public network availability by having multiple adapters configured in an IP network multipathing (IPMP) group

If one of the adapters fails, IP network multipathing software is called to fail over the defective interface to another adapter in the group. For more information about IPMP, see Chapter 5, Introduction to IPMP, in Managing Oracle Solaris 11.1 Network Performance.
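
As a sketch of what such a group looks like on Oracle Solaris 11.1, the following ipadm commands create an IPMP group over two public adapters and assign it an address. The interface names net0 and net1 and the address 192.0.2.10/24 are placeholders for your own public network configuration.

    # ipadm create-ip net0
    # ipadm create-ip net1
    # ipadm create-ipmp ipmp0
    # ipadm add-ipmp -i net0 -i net1 ipmp0
    # ipadm create-addr -T static -a 192.0.2.10/24 ipmp0/v4pub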

No special hardware considerations relate to clustering for the public network interfaces.

Logging Into the Cluster Remotely

You must have console access to all cluster nodes in the cluster. You can use the Parallel Console Access (pconsole) utility from the command line to log into the cluster remotely. The pconsole utility is part of the Oracle Solaris terminal/pconsole package. Install the package by executing pkg install terminal/pconsole. The pconsole utility creates a host terminal window for each remote host that you specify on the command line. The utility also opens a central, or master, console window that propagates what you input there to each of the connections that you open.
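
For example, to install the utility on the administrative console and open console windows to three cluster nodes (the node names phys-schost-1 through phys-schost-3 are placeholders), you could run:

    # pkg install terminal/pconsole
    # pconsole phys-schost-1 phys-schost-2 phys-schost-3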

The pconsole utility can be run from within X Windows or in console mode. Install pconsole on the machine that you will use as the administrative console for the cluster. If you have a terminal server connected to your cluster nodes' serial ports (serial consoles), you can access a serial console port by specifying the IP address of the terminal server and the relevant port on that terminal server (in the form terminal-server-IP:port-number).
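
For example, if a terminal concentrator at the hypothetical address 192.0.2.50 exposes the serial consoles of two nodes on ports 5001 and 5002, you could open both consoles with:

    # pconsole 192.0.2.50:5001 192.0.2.50:5002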

See the pconsole(1) man page for more information.

Administrative Console

You can use a dedicated workstation or administrative console to reach the cluster nodes or the terminal concentrator as needed to administer the active cluster. For more information, see Chapter 1, Introduction to Administering Oracle Solaris Cluster, in Oracle Solaris Cluster System Administration Guide.

You use the administrative console for remote access to the cluster nodes, either over the public network, or optionally through a network-based terminal concentrator.

Oracle Solaris Cluster does not require a dedicated administrative console, but using one provides these benefits:

    Enables centralized cluster management by grouping console and management tools on the same machine
    Provides potentially quicker problem resolution by your hardware service provider