Sun Cluster 3.0 Concepts

Cluster Nodes

A cluster node is a machine running both the Solaris operating environment and Sun Cluster software, and is either a current member of the cluster (a cluster member), or a potential member. The Sun Cluster software enables you to have from two to eight nodes in a cluster. See "Sun Cluster Topologies" for the supported node configurations.

Cluster nodes are generally attached to one or more multihost disks. One scalable services configuration allows nodes to service requests without being directly attached to multihost disks. The nodes not attached to multihost disks use the cluster file system to access the multihost disks.
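For example, a node that is not directly attached to the storage can still mount a cluster file system through a global device path. The following /etc/vfstab entry is an illustrative sketch only; the device names and mount point are hypothetical and not taken from this guide:

```
# /etc/vfstab entry for a cluster file system (illustrative paths)
# device to mount       device to fsck          mount point   FS   pass  at boot  options
/dev/global/dsk/d1s0    /dev/global/rdsk/d1s0   /global/data  ufs  2     yes      global,logging
```

The global mount option makes the file system accessible from every cluster node, whether or not that node is physically attached to the underlying disks.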

In parallel database configurations, nodes share concurrent access to all the disks. See "Multihost Disks" and Chapter 3, Key Concepts - Administration and Application Development for more information on parallel database configurations.

All nodes in the cluster are grouped under a common name - the cluster name - which is used for accessing and managing the cluster.


Public network adapters attach nodes to the public networks, providing client access to the cluster.

Cluster members communicate with the other nodes in the cluster through one or more physically independent networks referred to as private networks. The set of private networks in the cluster is referred to as the cluster interconnect.

Every node in the cluster is aware when another node joins or leaves the cluster. Additionally, every node in the cluster is aware of the resources that are running locally as well as the resources that are running on the other cluster nodes.

Configure cluster members with resources (applications, disk storage, and so forth) so that they provide failover capability, scalability, or both.

Make sure that the nodes in the same cluster have similar processing, memory, and I/O capability so that failover can occur without significant degradation in performance. Because of the possibility of failover, ensure that every node has enough excess capacity to take on the workload of all nodes for which it is a backup or secondary.
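The capacity-planning rule above can be sketched as a small calculation. This is an illustrative example only (the node names, workload figures, and backup assignments are hypothetical, not part of Sun Cluster): each surviving node must be able to carry its own workload plus the workload of every failed node it backs up.

```python
# Illustrative capacity check for failover planning (not a Sun Cluster tool).
# Workloads are expressed as a percentage of a single node's capacity.

def load_after_failover(workloads, backup_of, surviving):
    """Return each surviving node's load, assuming each failed node's
    workload moves to its designated backup node."""
    load = {node: workloads[node] for node in surviving}
    for failed, backup in backup_of.items():
        if failed not in surviving and backup in surviving:
            load[backup] += workloads[failed]
    return load

workloads = {"node1": 40, "node2": 50}             # hypothetical figures
backup_of = {"node1": "node2", "node2": "node1"}   # mutual backups

# If node1 fails, node2 must carry both workloads: 50 + 40 = 90 (< 100, OK).
print(load_after_failover(workloads, backup_of, surviving={"node2"}))
```

If the combined figure exceeded 100, the backup node would be overloaded after failover, which is exactly the situation the sizing guideline above is meant to prevent.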

Each node boots its own individual root (/) file system.

Software Components for Cluster Members

For a node to function as a cluster member, the following software must be installed:

- Solaris operating environment software
- Sun Cluster software
- Data service application software
- Volume management software (Solstice DiskSuite or VERITAS Volume Manager)

One exception is an Oracle Parallel Server (OPS) configuration that uses hardware redundant array of independent disks (RAID). This configuration does not require a software volume manager such as Solstice DiskSuite or VERITAS Volume Manager to manage the Oracle data.

See the Sun Cluster 3.0 Installation Guide for information on how to install the Solaris operating environment, Sun Cluster, and volume management software. See the Sun Cluster 3.0 Data Services Installation and Configuration Guide for information on how to install and configure data services.

See Chapter 3, Key Concepts - Administration and Application Development for conceptual information on the preceding software components.

The following figure provides a high-level view of the software components that work together to create the Sun Cluster software environment.

Figure 2-2 High-Level Relationship of Sun Cluster Software Components


See Chapter 4, Frequently Asked Questions for questions and answers about cluster members.