Sun Cluster Concepts Guide for Solaris OS

Chapter 2 Key Concepts for Hardware Service Providers

This chapter describes the key concepts that are related to the hardware components of a Sun Cluster configuration.

This chapter covers the following topics:

- Sun Cluster System Hardware and Software Components
- SPARC: Sun Cluster Topologies for SPARC
- x86: Sun Cluster Topologies for x86

Sun Cluster System Hardware and Software Components

This information is directed primarily to hardware service providers. These concepts can help service providers understand the relationships between the hardware components before they install, configure, or service cluster hardware. Cluster system administrators might also find this information useful as background to installing, configuring, and administering cluster software.

A cluster is composed of several hardware components, including the following:

- Cluster nodes with local disks (unshared)
- Multihost storage (disks that are shared between nodes)
- Removable media (tapes and CD-ROMs)
- Cluster interconnect
- Public network interfaces
- Client systems
- Console access devices
- Administrative console

The Sun Cluster software enables you to combine these components into a variety of configurations. The following sections describe these configurations.

For an illustration of a sample two-node cluster configuration, see Sun Cluster Hardware Environment in Sun Cluster Overview for Solaris OS.

Cluster Nodes

A cluster node is a machine that is running both the Solaris Operating System and Sun Cluster software. A cluster node is either a current member of the cluster (a cluster member), or a potential member.

Cluster nodes are generally attached to one or more multihost devices. Nodes that are not attached to multihost devices use the cluster file system to access the multihost devices. For example, one scalable services configuration enables nodes to service requests without being directly attached to multihost devices.

In addition, nodes in parallel database configurations share concurrent access to all the disks.

All nodes in the cluster are grouped under a common name (the cluster name), which is used for accessing and managing the cluster.

Public network adapters attach nodes to the public networks, providing client access to the cluster.

Cluster members communicate with the other nodes in the cluster through one or more physically independent networks. This set of physically independent networks is referred to as the cluster interconnect.

Every node in the cluster is aware when another node joins or leaves the cluster. Additionally, every node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other cluster nodes.
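
For example, on a node that is running the Sun Cluster command-line utilities, you can typically display the current cluster membership with the scstat(1M) command. The following invocation is illustrative only, and its output format varies by release:

    # scstat -n

The command reports each configured node and its current membership status.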

Nodes in the same cluster should have similar processing, memory, and I/O capability to enable failover to occur without significant degradation in performance. Because of the possibility of failover, every node must have enough excess capacity to support the workload of all nodes for which it is a backup or secondary.

Each node boots its own individual root (/) file system.

Software Components for Cluster Hardware Members

To function as a cluster member, a node must have the following software installed:

- Solaris Operating System software
- Sun Cluster software
- A data service application
- Volume management software (Solaris Volume Manager or VERITAS Volume Manager)

The following figure provides a high-level view of the software components that work together to create the Sun Cluster environment.

Figure 2–1 High-Level Relationship of Sun Cluster Software Components


See Chapter 4, Frequently Asked Questions for questions and answers about cluster members.

Multihost Devices

Disks that can be connected to more than one node at a time are multihost devices. In the Sun Cluster environment, multihost storage makes disks highly available. Sun Cluster software requires multihost storage for two-node clusters to establish quorum. Clusters with more than two nodes do not require quorum devices. For more information about quorum, see Quorum and Quorum Devices.
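
As an illustration, you can typically verify the quorum configuration and the current vote counts from any cluster member with the scstat(1M) command. The following invocation is illustrative only:

    # scstat -q

The output lists the quorum votes that are contributed by nodes and by any configured quorum devices.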

Multihost devices have the following characteristics.

A volume manager provides for mirrored or RAID-5 configurations for data redundancy of the multihost devices. Currently, Sun Cluster supports Solaris Volume Manager and VERITAS Volume Manager as volume managers, and the RDAC RAID-5 hardware controller on several hardware RAID platforms.

Combining multihost devices with disk mirroring and disk striping protects against both node failure and individual disk failure.

See Chapter 4, Frequently Asked Questions for questions and answers about multihost storage.

Multi-Initiator SCSI

This section applies only to SCSI storage devices and not to Fibre Channel storage used for the multihost devices.

In a standalone server, the server node controls the SCSI bus activities by way of the SCSI host adapter circuit that connects this server to a particular SCSI bus. This SCSI host adapter circuit is referred to as the SCSI initiator. This circuit initiates all bus activities for this SCSI bus. The default SCSI address of SCSI host adapters in Sun systems is 7.

Cluster configurations share storage between multiple server nodes, using multihost devices. When the cluster storage consists of single-ended or differential SCSI devices, the configuration is referred to as multi-initiator SCSI. As this terminology implies, more than one SCSI initiator exists on the SCSI bus.

The SCSI specification requires each device on a SCSI bus to have a unique SCSI address. (The host adapter is also a device on the SCSI bus.) The default hardware configuration in a multi-initiator environment results in a conflict because all SCSI host adapters default to 7.

To resolve this conflict, on each SCSI bus, leave one of the SCSI host adapters with the SCSI address of 7, and set the other host adapters to unused SCSI addresses. Proper planning dictates that these “unused” SCSI addresses include both currently and eventually unused addresses. An example of an address that is unused now but used eventually is the address of storage that you add later by installing new drives into empty drive slots.

In most configurations, the available SCSI address for a second host adapter is 6.

You can change the selected SCSI addresses for these host adapters by using one of the following tools to set the scsi-initiator-id property:

- eeprom(1M)
- The OpenBoot PROM on a SPARC based system
- The scsi utility that you optionally run after the BIOS boots, on an x86 based system

You can set this property globally for a node or on a per-host-adapter basis. Instructions for setting a unique scsi-initiator-id for each SCSI host adapter are included in Sun Cluster 3.1 - 3.2 With SCSI JBOD Storage Device Manual for Solaris OS.
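
For example, the global setting for a node can typically be made either at the OpenBoot PROM ok prompt or from the running Solaris OS with the eeprom(1M) command. The following commands are illustrative only, and the value 6 is just an example of an otherwise unused address:

    ok printenv scsi-initiator-id
    ok setenv scsi-initiator-id 6

    # eeprom scsi-initiator-id=6

Per-adapter values are typically set in an nvramrc script, as described in the referenced manual.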

Local Disks

Local disks are the disks that are only connected to a single node. Local disks are, therefore, not protected against node failure (they are not highly available). However, all disks, including local disks, are included in the global namespace and are configured as global devices. Therefore, the disks themselves are visible from all cluster nodes.

You can make the file systems on local disks available to other nodes by placing them under a global mount point. If the node that currently has one of these global file systems mounted fails, all nodes lose access to that file system. Using a volume manager lets you mirror these disks so that a single disk failure cannot cause these file systems to become inaccessible, but volume managers do not protect against node failure.
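
To illustrate, a file system is placed under a global mount point by mounting it with the global mount option, typically through an /etc/vfstab entry. The following entry is a sketch only; the device paths and mount point are placeholders:

    /dev/dsk/c0t0d0s6  /dev/rdsk/c0t0d0s6  /global/example  ufs  2  yes  global,logging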

See the section Global Devices for more information about global devices.

Removable Media

Removable media such as tape drives and CD-ROM drives are supported in a cluster. In general, you install, configure, and service these devices in the same way as in a nonclustered environment. These devices are configured as global devices in Sun Cluster, so each device can be accessed from any node in the cluster. Refer to Sun Cluster 3.1 - 3.2 Hardware Administration Manual for Solaris OS for information about installing and configuring removable media.
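
For instance, assuming the default global devices namespace, removable media that are configured as global devices typically appear on every node under paths such as the following; the exact entries depend on your configuration:

    # ls /dev/global/rmt
    # ls /dev/global/rdsk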

See the section Global Devices for more information about global devices.

Cluster Interconnect

The cluster interconnect is the physical configuration of devices that is used to transfer cluster-private communications and data service communications between cluster nodes. Because the interconnect is used extensively for cluster-private communications, it can limit performance.

Only cluster nodes can be connected to the cluster interconnect. The Sun Cluster security model assumes that only cluster nodes have physical access to the cluster interconnect.

All nodes must be connected by the cluster interconnect through at least two redundant physically independent networks, or paths, to avoid a single point of failure. You can have several physically independent networks (two to six) between any two nodes.

The cluster interconnect consists of three hardware components: adapters, junctions, and cables. The following list describes each of these hardware components.

- Adapters: The network interface cards that reside in each cluster node. A network adapter can have one or more physical interfaces.
- Junctions: The switches that reside outside of the cluster nodes. Junctions perform pass-through and switching functions. In a two-node cluster, junctions are not required because the nodes can be directly connected to each other.
- Cables: The physical connections that you install either between two network adapters or between a network adapter and a junction.
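
As an illustration, you can typically check the status of the interconnect paths from any cluster member with the scstat(1M) command. The following invocation is illustrative only:

    # scstat -W

The output lists each configured transport path and whether the path is currently online.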

See Chapter 4, Frequently Asked Questions for questions and answers about the cluster interconnect.

Public Network Interfaces

Clients connect to the cluster through the public network interfaces. Each network adapter card can connect to one or more public networks, depending on whether the card has multiple hardware interfaces.

You can set up nodes to include multiple public network interface cards that perform the following functions:

- Are configured so that multiple adapters are active
- Serve as failover backups for one another

If one of the adapters fails, Internet Protocol (IP) Network Multipathing software is called to fail over the defective interface to another adapter in the group.
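
For example, two public network adapters back each other up when they are placed in the same IP Network Multipathing group. The following commands are a sketch only; the adapter names and group name are placeholders, and a production configuration also defines persistent /etc/hostname.<adapter> entries and any required test addresses, as described in the Solaris IP Network Multipathing documentation:

    # ifconfig qfe0 group sc_ipmp0
    # ifconfig qfe1 group sc_ipmp0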

No special hardware considerations relate to clustering for the public network interfaces.

See Chapter 4, Frequently Asked Questions for questions and answers about public networks.

Client Systems

Client systems include workstations or other servers that access the cluster over the public network. Client-side programs use data or other services that are provided by server-side applications running on the cluster.

Client systems are not highly available. Data and applications on the cluster are highly available.

See Chapter 4, Frequently Asked Questions for questions and answers about client systems.

Console Access Devices

You must have console access to all cluster nodes.

To gain console access, use one of the following devices:

- The terminal concentrator that you purchased with your cluster hardware
- The System Service Processor (SSP) on Sun Enterprise E10000 servers (SPARC based clusters only)
- Another device that can access ttya on each node

Only one supported terminal concentrator is available from Sun, and its use is optional. The terminal concentrator enables access to /dev/console on each node by using a TCP/IP network. The result is console-level access for each node from a remote workstation anywhere on the network.

The System Service Processor (SSP) provides console access for Sun Enterprise E10000 servers. The SSP is a machine on an Ethernet network that is configured to support the Sun Enterprise E10000 server. The SSP is the administrative console for the Sun Enterprise E10000 server. Using the Sun Enterprise E10000 Network Console feature, any workstation in the network can open a host console session.

Other console access methods include other terminal concentrators, tip serial port access from another node, and dumb terminals. You can use Sun keyboards and monitors, or other serial port devices, if your hardware service provider supports them.

Administrative Console

You can use a dedicated workstation, known as the administrative console, to administer the active cluster. Usually, you install and run administrative tool software, such as the Cluster Control Panel (CCP) and the Sun Cluster module for the Sun Management Center product (for use with SPARC based clusters only), on the administrative console. Using cconsole under the CCP enables you to connect to more than one node console at a time. For more information about how to use the CCP, see Chapter 1, Introduction to Administering Sun Cluster, in Sun Cluster System Administration Guide for Solaris OS.
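
For example, assuming that the CCP software and its cluster configuration files are set up on the administrative console, the following command typically opens a console window for every node in the named cluster; the cluster name is a placeholder:

    # cconsole clustername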

The administrative console is not a cluster node. You use the administrative console for remote access to the cluster nodes, either over the public network, or optionally through a network-based terminal concentrator.

If your cluster consists of the Sun Enterprise E10000 platform, you must do the following:

- Log in from the administrative console to the System Service Processor (SSP)
- Connect by using the netcon command

Typically, you configure nodes without monitors. Then, you access the node's console through a telnet session from the administrative console. The administrative console is connected to a terminal concentrator, and from the terminal concentrator to the node's serial port. In the case of a Sun Enterprise E10000 server, you connect from the System Service Processor. See Console Access Devices for more information.
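
To illustrate, connecting to a node's console through a terminal concentrator is typically a telnet session to the concentrator's host name and the TCP port that is mapped to that node's serial port. The following command is a sketch only; the host name and port number are placeholders that depend on how the terminal concentrator is configured:

    # telnet tc-name 5002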

Sun Cluster does not require a dedicated administrative console, but using one provides these benefits:

- Enables centralized cluster management by grouping console and management tools on the same machine
- Provides potentially quicker problem resolution by your hardware service provider

See Chapter 4, Frequently Asked Questions for questions and answers about the administrative console.

SPARC: Sun Cluster Topologies for SPARC

A topology is the connection scheme that connects the cluster nodes to the storage platforms that are used in a Sun Cluster environment. Sun Cluster software supports any topology that adheres to the following guidelines.

Sun Cluster software does not require you to configure a cluster by using specific topologies. The following topologies are described to provide the vocabulary to discuss a cluster's connection scheme. These topologies are typical connection schemes.

The following sections include sample diagrams of each topology.

SPARC: Clustered Pair Topology for SPARC

A clustered pair topology is two or more pairs of nodes that operate under a single cluster administrative framework. In this configuration, failover occurs only between a pair. However, all nodes are connected by the cluster interconnect and operate under Sun Cluster software control. You might use this topology to run a parallel database application on one pair and a failover or scalable application on another pair.

Using the cluster file system, you could also have a two-pair configuration in which more than two nodes run a scalable service or parallel database, even though not all of the nodes are directly connected to the disks that store the application data.

The following figure illustrates a clustered pair configuration.

Figure 2–2 SPARC: Clustered Pair Topology


SPARC: Pair+N Topology for SPARC

The pair+N topology includes a pair of nodes that are directly connected to the following:

- Shared storage
- An additional set of nodes that use the cluster interconnect to access the shared storage (these nodes have no direct connection to the shared storage themselves)

The following figure illustrates a pair+N topology where two of the four nodes (Node 3 and Node 4) use the cluster interconnect to access the storage. This configuration can be expanded to include additional nodes that do not have direct access to the shared storage.

Figure 2–3 Pair+N Topology


SPARC: N+1 (Star) Topology for SPARC

An N+1 topology includes some number of primary nodes and one secondary node. You do not have to configure the primary nodes and secondary node identically. The primary nodes actively provide application services. The secondary node need not be idle while waiting for a primary node to fail.

The secondary node is the only node in the configuration that is physically connected to all the multihost storage.

If a failure occurs on a primary node, Sun Cluster fails over the resources to the secondary node. The secondary node is where the resources function until they are switched back (either automatically or manually) to the primary node.
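
For example, a manual switchback of a failover resource group to its primary node can typically be performed with the scswitch(1M) command. The following invocation is a sketch only; the resource group and node names are placeholders:

    # scswitch -z -g resource-group -h phys-schost-1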

The secondary node must always have enough excess CPU capacity to handle the load if one of the primary nodes fails.

The following figure illustrates an N+1 configuration.

Figure 2–4 SPARC: N+1 Topology


SPARC: N*N (Scalable) Topology for SPARC

An N*N topology enables every shared storage device in the cluster to connect to every node in the cluster. This topology enables highly available applications to fail over from one node to another without service degradation. When failover occurs, the new node can access the storage device by using a local path instead of the private interconnect.

The following figure illustrates an N*N configuration.

Figure 2–5 SPARC: N*N Topology


x86: Sun Cluster Topologies for x86

A topology is the connection scheme that connects the cluster nodes to the storage platforms that are used in the cluster. Sun Cluster supports any topology that adheres to the following guidelines.

Sun Cluster does not require you to configure a cluster by using specific topologies. The following clustered pair topology, which is the only topology for clusters that are composed of x86 based nodes, is described to provide the vocabulary to discuss a cluster's connection scheme. This topology is a typical connection scheme.

The following section includes a sample diagram of the topology.

x86: Clustered Pair Topology for x86

A clustered pair topology is two nodes that operate under a single cluster administrative framework. In this configuration, failover occurs only between a pair. However, all nodes are connected by the cluster interconnect and operate under Sun Cluster software control. You might use this topology to run a parallel database or a failover or scalable application on the pair.

The following figure illustrates a clustered pair configuration.

Figure 2–6 x86: Clustered Pair Topology
