Oracle Solaris Cluster System Hardware and Software Components

This information is directed primarily to hardware service providers. These concepts can help service providers understand the relationships between the hardware components before they install, configure, or service cluster hardware. Cluster system administrators might also find this information useful as background for installing, configuring, and administering cluster software.

A cluster is composed of several hardware components, including the following:

- Oracle Solaris hosts (cluster nodes) with local disks (unshared)
- Multihost storage (disks that are shared between hosts)
- Removable media (tapes and CD-ROMs)
- Cluster interconnect
- Public network interfaces
- Client systems
- Console access devices
- Administrative console

Figure 2-1 illustrates how the hardware components work with each other.

Figure 2-1 Oracle Solaris Cluster Hardware Components

The figure shows a two-host cluster with public and private networks, cluster interconnect hardware, local and multihost disks, a console, and client systems.

The Oracle Solaris Cluster software enables you to combine the hardware components into a variety of configurations. The following sections describe these configurations.

Cluster Nodes

A node is an Oracle Solaris zone that is associated with a cluster. In this environment, an Oracle Solaris host (or simply host) is one of the following hardware or software configurations that runs the Oracle Solaris OS and its own processes:

Depending on your platform, Oracle Solaris Cluster software supports the following configurations:

Oracle Solaris hosts are generally attached to one or more multihost devices. Hosts that are not attached to multihost devices can use a cluster file system to access the multihost devices. For example, one scalable services configuration enables hosts to service requests without being directly attached to multihost devices.

In addition, hosts in parallel database configurations share concurrent access to all the disks.

All nodes in the cluster are grouped under a common name (the cluster name), which is used for accessing and managing the cluster.

Public network adapters attach hosts to the public networks, providing client access to the cluster.

Cluster members communicate with the other hosts in the cluster through one or more physically independent networks. This set of physically independent networks is referred to as the cluster interconnect.

Every node in the cluster is aware when another node joins or leaves the cluster. Additionally, every node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other cluster nodes.

Hosts in the same cluster should have similar processing, memory, and I/O capability to enable failover to occur without significant degradation in performance. Because of the possibility of failover, every host must have enough excess capacity to support the workload of all hosts for which it is a backup or secondary.

Each host boots its own individual root (/) file system.
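
As an illustration, you can check cluster membership and per-node status from any cluster node with the Oracle Solaris Cluster command-line utilities. The following is a minimal sketch; the node name in the prompt is a placeholder.

    # Show the membership status of every cluster node
    phys-schost-1# clnode status

    # Show more general cluster status, including nodes and quorum
    phys-schost-1# cluster status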

Software Components for Cluster Hardware Members

To function as a cluster member, an Oracle Solaris host must have the following software installed:

- Oracle Solaris Operating System
- Oracle Solaris Cluster software
- A data service application
- Volume management software (Solaris Volume Manager or Veritas Volume Manager)

The following figure provides a high-level view of the software components that work together to create the Oracle Solaris Cluster environment.

Figure 2-2 High-Level Relationship of Oracle Solaris Cluster Components


Figure 2-3 shows a high-level view of the software components that work together to create the Oracle Solaris Cluster software environment.

Figure 2-3 Oracle Solaris Cluster Software Architecture

The figure shows the main Oracle Solaris Cluster software components, such as the RGM, CMM, CCR, volume managers, and the PxFS cluster file system.

Multihost Devices

Disks that can be connected to more than one Oracle Solaris host at a time are multihost devices. In the Oracle Solaris Cluster environment, multihost storage makes disks highly available. Oracle Solaris Cluster software requires multihost storage for two-host clusters to establish quorum. Clusters with more than two hosts do not require quorum devices. For more information about quorum, see Quorum and Quorum Devices.

Multihost devices have the following characteristics:

A volume manager provides for mirrored or RAID-5 configurations for data redundancy of the multihost devices. Currently, Oracle Solaris Cluster supports Solaris Volume Manager and Veritas Volume Manager as volume managers, and the RDAC RAID-5 hardware controller on several hardware RAID platforms.

Combining multihost devices with disk mirroring and disk striping protects against both host failure and individual disk failure.
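
For example, in a two-host cluster you typically configure one of the shared multihost disks as a quorum device. The following is an illustrative sketch only; the DID device name d4 is a placeholder and depends on your configuration.

    # List the DID devices that are visible to the cluster
    phys-schost-1# cldevice list -v

    # Configure a shared disk as a quorum device (d4 is a placeholder DID name)
    phys-schost-1# clquorum add d4

    # Verify the quorum configuration
    phys-schost-1# clquorum status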

Multi-Initiator SCSI

This section applies only to SCSI storage devices and not to Fibre Channel storage that is used for the multihost devices.

In a standalone (that is, non-clustered) host, the host controls the SCSI bus activities by way of the SCSI host adapter circuit that connects this host to a particular SCSI bus. This SCSI host adapter circuit is referred to as the SCSI initiator. This circuit initiates all bus activities for this SCSI bus. The default SCSI address of SCSI host adapters in Oracle's Sun systems is 7.

Cluster configurations share storage between multiple hosts, using multihost devices. When the cluster storage consists of single-ended or differential SCSI devices, the configuration is referred to as multi-initiator SCSI. As this terminology implies, more than one SCSI initiator exists on the SCSI bus.

The SCSI specification requires each device on a SCSI bus to have a unique SCSI address. (The host adapter is also a device on the SCSI bus.) The default hardware configuration in a multi-initiator environment results in a conflict because all SCSI host adapters default to 7.

To resolve this conflict, on each SCSI bus, leave one of the SCSI host adapters with the SCSI address of 7, and set the other host adapters to unused SCSI addresses. Proper planning dictates that these "unused" SCSI addresses include both currently and eventually unused addresses. For example, an address that is unused now can become used later if you add storage by installing new drives into empty drive slots.

In most configurations, the available SCSI address for a second host adapter is 6.

You can change the selected SCSI addresses for these host adapters by using one of the following tools to set the scsi-initiator-id property:

You can set this property globally for a host or on a per-host-adapter basis. Instructions for setting a unique scsi-initiator-id for each SCSI host adapter are included in Oracle Solaris Cluster 3.3 With SCSI JBOD Storage Device Manual.
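
For illustration only, the following sketch shows how the scsi-initiator-id is commonly changed globally for a host, either from the Oracle Solaris OS with the eeprom(1M) command or at the OpenBoot PROM ok prompt on a SPARC based system. Setting the property for an individual host adapter uses an nvramrc script, as described in the manual referenced above; the host name and the example value 6 are placeholders.

    # From the Oracle Solaris OS, set the global value on the second host
    phys-schost-2# eeprom scsi-initiator-id=6

    # Or, at the OpenBoot PROM ok prompt on a SPARC based system:
    ok setenv scsi-initiator-id 6
    ok reset-all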

Local Disks

Local disks are the disks that are only connected to a single Oracle Solaris host. Local disks are, therefore, not protected against host failure (they are not highly available). However, all disks, including local disks, are included in the global namespace and are configured as global devices. Therefore, the disks themselves are visible from all cluster hosts.

You can make the file systems on local disks available to other hosts by placing them under a global mount point. If the host that currently has one of these global file systems mounted fails, all hosts lose access to that file system. Using a volume manager lets you mirror these disks so that a failure cannot cause these file systems to become inaccessible, but volume managers do not protect against host failure.
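
As an illustrative sketch only, a file system on a global device is made available to all hosts through an /etc/vfstab entry that uses the global mount option. The DID-based device name d4 and the mount point /global/data are placeholders; the entry is typically identical on every cluster host.

    # device to mount        device to fsck          mount point   FS type  pass  at boot  options
    /dev/global/dsk/d4s0     /dev/global/rdsk/d4s0   /global/data  ufs      2     yes      global,logging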

See the section Global Devices for more information about global devices.

Removable Media

Removable media such as tape drives and CD-ROM drives are supported in a cluster. In general, you install, configure, and service these devices in the same way as in a nonclustered environment. These devices are configured as global devices in Oracle Solaris Cluster, so each device can be accessed from any node in the cluster. Refer to Oracle Solaris Cluster 3.3 Hardware Administration Manual for information about installing and configuring removable media.

See the section Global Devices for more information about global devices.

Cluster Interconnect

The cluster interconnect is the physical configuration of devices that is used to transfer cluster-private communications and data service communications between Oracle Solaris hosts in the cluster. Because the interconnect is used extensively for cluster-private communications, it can limit performance.

Only hosts in the cluster can be connected to the cluster interconnect. The Oracle Solaris Cluster security model assumes that only cluster hosts have physical access to the cluster interconnect.

You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and less availability. If a single interconnect fails, moreover, the cluster is at a higher risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure.
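
For example, after the interconnects are cabled and configured, you can verify from any node that all configured interconnect paths are online. The prompt name is a placeholder, and the output depends on your configuration.

    # Check the status of every configured cluster interconnect path
    phys-schost-1# clinterconnect status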

The cluster interconnect consists of three hardware components: adapters, junctions, and cables.

- Adapters: The network interface cards that reside in each cluster host and connect the host to the private interconnect.
- Junctions: The switches that reside outside of the cluster hosts and join the interconnect cables from the different hosts. In a two-host cluster, you do not need junctions if the hosts are connected directly to each other with back-to-back cables.
- Cables: The physical connections that you install either between two transport adapters or between an adapter and a junction.

Figure 2-4 shows how the two hosts are connected by a transport adapter, cables, and a transport switch.

Figure 2-4 Cluster Interconnect

The figure shows two hosts connected by transport adapters, cables, and a transport switch.

Public Network Interfaces

Clients connect to the cluster through the public network interfaces. Each network adapter card can connect to one or more public networks, depending on whether the card has multiple hardware interfaces.

You can set up Oracle Solaris hosts in the cluster to include multiple public network interface cards that are active at the same time and that serve as backups for one another through IP network multipathing (IPMP).

If one of the adapters fails, IP network multipathing software is called to fail over the defective interface to another adapter in the group.

No special hardware considerations relate to clustering for the public network interfaces.
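
A minimal sketch of such a configuration on Oracle Solaris 10 follows, assuming link-based IPMP with two adapters in the same group. The adapter names e1000g0 and e1000g1, the host name, and the group name sc_ipmp0 are all illustrative placeholders.

    # /etc/hostname.e1000g0  (primary public network adapter, carries the host's address)
    phys-schost-1 group sc_ipmp0 up

    # /etc/hostname.e1000g1  (backup adapter in the same IPMP group)
    group sc_ipmp0 up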

Client Systems

Client systems include machines or other hosts that access the cluster over the public network. Client-side programs use data or other services that are provided by server-side applications running on the cluster.

Client systems are not highly available. Data and applications on the cluster are highly available.

Console Access Devices

You must have console access to all Oracle Solaris hosts in the cluster.

To gain console access, use a terminal concentrator, the System Service Processor (SSP) on Sun Enterprise E10000 servers, or another device that provides serial console access to each host, as described in the following paragraphs.

Only one supported terminal concentrator is available from Oracle, and use of this supported Sun terminal concentrator is optional. The terminal concentrator enables access to /dev/console on each host by using a TCP/IP network. The result is console-level access for each host from a remote machine anywhere on the network.

The System Service Processor (SSP) provides console access for Sun Enterprise E10000 servers. The SSP is a processor card in a machine on an Ethernet network that is configured to support the Sun Enterprise E10000 server. The SSP is the administrative console for the Sun Enterprise E10000 server. Using the Sun Enterprise E10000 Network Console feature, any machine in the network can open a host console session.

Other console access methods include other terminal concentrators, tip serial port access from another host, and dumb terminals.


Caution - You can attach a keyboard or monitor to a cluster host provided that the keyboard and monitor are supported by the base server platform. However, you cannot use that keyboard or monitor as a console device. You must redirect the console to a serial port or, depending on your machine, to the System Service Processor (SSP) or Remote System Control (RSC) by setting the appropriate OpenBoot PROM parameter.


Administrative Console

You can use a dedicated machine, known as the administrative console, to administer the active cluster. Usually, you install and run administrative tool software, such as the Cluster Control Panel (CCP) and the Oracle Solaris Cluster module for the Sun Management Center product (for use with SPARC based clusters only), on the administrative console. Using cconsole under the CCP enables you to connect to more than one host console at a time. For more information about how to use the CCP, see Chapter 1, Introduction to Administering Oracle Solaris Cluster, in Oracle Solaris Cluster System Administration Guide.
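
For example, from the administrative console the cconsole utility in the CCP can open a console window for every host at once. The cluster and host names shown here are placeholders and must be defined in the CCP configuration.

    # Open a console window to each host in the cluster named "schost"
    admin-console# cconsole schost &

    # Or open consoles to specific hosts only
    admin-console# cconsole phys-schost-1 phys-schost-2 &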

The administrative console is not a cluster host. You use the administrative console for remote access to the cluster hosts, either over the public network, or optionally through a network-based terminal concentrator.

If your cluster consists of the Sun Enterprise E10000 platform, you must do the following:

Typically, you configure hosts without monitors. Then, you access the host's console through a telnet session from the administrative console. The administrative console is connected to a terminal concentrator, and from the terminal concentrator to the host's serial port. In the case of a Sun Enterprise E10000 server, you connect from the System Service Processor. See Console Access Devices for more information.

Oracle Solaris Cluster does not require a dedicated administrative console, but using one provides these benefits:

- Centralized cluster management: console and management tools, such as the CCP, are grouped on the same machine.
- Potentially quicker problem resolution by your hardware service provider.