Oracle9i Real Application Clusters Concepts
Release 1 (9.0.1)

Part Number A89867-02

3 Cluster Hardware Architecture

This chapter describes the hardware components and various high-level architectural models that typify cluster environments. It explains the basic hardware for nodes as well as the hardware that unites the individual nodes into a cluster.

Topics in this chapter include:

Overview of Cluster Hardware Components
Node Components
Memory, Interconnect, and Storage
Clusters: Nodes and the Interconnect
Interoperability with Other Systems

Overview of Cluster Hardware Components

A cluster comprises two or more nodes that are linked by an interconnect. The interconnect serves as the communication path between the nodes in the cluster. The Oracle instances use the interconnect for the messaging required to synchronize each instance's manipulation of the shared data. The shared data that the nodes access resides in storage devices.

The architectural model you select to deploy your Real Application Clusters application depends on your processing goals. This chapter describes cluster components in more detail.

Node Components

A node has these main components:

One or more CPUs
Memory
Storage
The interconnect

You can purchase these components in several configurations. The arrangement of these components determines how each node in a cluster accesses memory and storage.

Memory, Interconnect, and Storage

All clusters use CPUs in more or less the same way. However, you can configure the remaining components (memory, storage, and the interconnect) in different ways for different purposes.

Memory Access

Multiple CPUs are typically configured to share main memory. This enables you to create a single computer system that delivers scalable performance. This type of system is also less expensive to build than a single CPU with equivalent processing power. A computer with a single CPU is known as a uniprocessor.
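
As a simple illustration of the shared memory model (this example is not part of the Oracle documentation), the following C sketch starts several threads, which the operating system typically schedules onto different CPUs, and has all of them update the same location in main memory. The thread count, counter, and mutex are illustrative only.

/* Minimal sketch (illustration only): several threads, typically scheduled
 * onto different CPUs, read and write the same main-memory location.
 * Build with: gcc shared_memory.c -pthread */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static long shared_counter = 0;                       /* lives in shared main memory */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                    /* serialize access to the shared location */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t tids[NTHREADS];

    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&tids[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tids[i], NULL);

    /* Every CPU saw the same memory, so the total is deterministic. */
    printf("counter = %ld\n", shared_counter);
    return 0;
}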

This section discusses two configurations of shared memory systems:

Uniform Memory Access (UMA)
Non-Uniform Memory Access (NUMA)

Uniform Memory Access (UMA)

In uniform memory access (UMA) configurations, all processors can access main memory at the same speed. In this configuration, memory access is uniform. This configuration is also known as a Symmetric Multi-Processor (SMP) system as illustrated in Figure 3-1.

Non-Uniform Memory Access (NUMA)

In non-uniform memory access (NUMA) configurations, all processors have access to all memory structures, but the accesses are not equal: the cost of accessing a specific location in main memory differs among the CPUs, depending on which part of memory each processor accesses.
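
As an illustration of how software can take this cost difference into account, the following C sketch uses the Linux libnuma library (an assumption about the platform; it is not part of Real Application Clusters) to place a buffer in the memory of one node, so that CPUs on that node pay the local rather than the remote access cost. Build with -lnuma.

/* Illustration only: place a buffer on a chosen NUMA node with libnuma
 * so the CPUs on that node pay the local, not the remote, access cost.
 * Build with: gcc numa_demo.c -lnuma */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA policy is not supported on this system\n");
        return 1;
    }

    int last_node = numa_max_node();              /* highest memory node number */
    size_t size = 64 * 1024 * 1024;

    /* Allocate the buffer from node 0's local memory. A CPU on node 0
     * accesses it at local speed; CPUs on other nodes pay the remote cost. */
    void *buf = numa_alloc_onnode(size, 0);
    if (buf == NULL) {
        fprintf(stderr, "numa_alloc_onnode failed\n");
        return 1;
    }
    memset(buf, 0, size);

    printf("nodes 0..%d available, buffer placed on node 0\n", last_node);
    numa_free(buf, size);
    return 0;
}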

Performance of UMA Versus NUMA

Performance in both UMA and NUMA systems is limited by memory bus bandwidth. This means that adding CPUs beyond a certain point does not increase performance linearly. The point at which adding CPUs yields minimal performance improvement varies by application type and system architecture. Typically, SMP configurations do not scale well beyond 24 to 64 processors.

Figure 3-1 Tightly Coupled Shared Memory System or UMA



Advantages of Shared Memory

The cluster database processing advantages of shared memory systems are:

A disadvantage of shared memory systems for cluster database processing is that scalability is limited by the bandwidth and latency of the bus and by available memory.

The High Speed IPC Interconnect

The high speed interprocess communication (IPC) interconnect is a high bandwidth, low latency communication facility that connects each node to the other nodes in the cluster. The high speed interconnect routes messages and other cluster database processing-specific traffic among the nodes to coordinate each node's access to data and to data-dependent resources.

Real Application Clusters also makes use of user-mode IPC and memory-mapped IPC. These substantially reduce both CPU consumption and IPC latency.

You can use Ethernet, a Fiber Distributed Data Interface (FDDI), or other proprietary hardware for your interconnect. Also consider installing a backup interconnect in case your primary interconnect fails. A backup interconnect enhances high availability and reduces the likelihood of the interconnect becoming a single point of failure.
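
The following C sketch is a generic illustration, not Oracle's IPC implementation: it sends a small coordination message to a peer node over the private interconnect network using a UDP socket. The peer address, port, and message format are hypothetical.

/* Generic illustration (not Oracle's IPC protocol): send a small
 * coordination message to a peer node over the private interconnect
 * network. The peer address, port, and payload are hypothetical. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    int sock = socket(AF_INET, SOCK_DGRAM, 0);               /* UDP keeps per-message latency low */
    if (sock < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in peer;
    memset(&peer, 0, sizeof(peer));
    peer.sin_family = AF_INET;
    peer.sin_port = htons(5500);                             /* hypothetical port */
    inet_pton(AF_INET, "192.168.10.2", &peer.sin_addr);      /* peer node on the interconnect */

    const char *msg = "block-request:file=7,block=42";       /* illustrative payload */
    ssize_t sent = sendto(sock, msg, strlen(msg), 0,
                          (struct sockaddr *)&peer, sizeof(peer));
    if (sent < 0)
        perror("sendto");

    close(sock);
    return 0;
}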

Storage Access in Clustered Systems

Clustered systems use several storage access models. Each model uses a resource sharing scheme that is best suited to a particular purpose.

The type of storage access Real Application Clusters uses is independent of the type of memory access it uses. For example, a cluster of SMP nodes can be configured with either uniform or non-uniform disk subsystems.

Real Application Clusters uses two storage access models, discussed in these sections:

Uniform Disk Access
Non-Uniform Disk Access

Uniform Disk Access

In uniform disk access systems, or shared disk systems, as shown in Figure 3-2, the cost of disk access is the same for all nodes.

Figure 3-2 Uniform Access Shared Disk System



The cluster in Figure 3-2 is composed of multiple SMP nodes. Shared disk subsystems like this are most often implemented by using shared SCSI or Fibre Channel connections to a large number of disks.

Fibre Channel is a generic term for a high speed serial data transfer architecture recently standardized by the American National Standards Institute (ANSI). The Fibre Channel architecture was developed by a consortium of computer and mass storage manufacturers.

Advantages of Uniform Disk Access

The advantages of using cluster database processing on shared disk systems with uniform access are:

Non-Uniform Disk Access

In some systems, disk storage is attached to only one node. For that node, the access is local. For all other nodes, a request for disk access or data must be forwarded by a software virtual disk layer over the interconnect to the node where the disk is locally attached. This means that the cost of a disk read or write varies significantly depending on whether the access is local or remote. The costs of reading or writing blocks on remote disks, including the interconnect latency and the IPC overhead, make such an operation considerably more expensive than the same operation on a uniform disk access configuration.
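
The following C sketch illustrates the decision such a virtual disk layer makes; the structure and helper names are hypothetical and do not describe any vendor's implementation. A block read is served directly when the disk is attached to the local node and is forwarded over the interconnect otherwise.

/* Conceptual sketch of a virtual disk layer (hypothetical names): serve a
 * block read locally when the disk is attached to this node, otherwise
 * forward the request across the interconnect. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 8192

/* Hypothetical descriptor: which node a virtual disk is physically attached to. */
struct vdisk {
    int owner_node;     /* node with the local attachment */
    int local_fd;       /* valid only on the owner node */
};

static int this_node_id = 0;    /* identity of the local node (illustrative) */

/* Hypothetical transport: ship the request to the owner and wait for the data. */
static int forward_read(int owner_node, uint64_t block_no, void *buf)
{
    printf("forwarding read of block %llu to node %d over the interconnect\n",
           (unsigned long long)block_no, owner_node);
    memset(buf, 0, BLOCK_SIZE);          /* placeholder for the returned data */
    return 0;
}

static int read_local(int fd, uint64_t block_no, void *buf)
{
    printf("reading block %llu from the locally attached disk (fd %d)\n",
           (unsigned long long)block_no, fd);
    memset(buf, 0, BLOCK_SIZE);          /* placeholder for an actual pread() */
    return 0;
}

/* The cost of the read differs sharply between the two branches. */
static int vdisk_read(struct vdisk *d, uint64_t block_no, void *buf)
{
    if (d->owner_node == this_node_id)
        return read_local(d->local_fd, block_no, buf);      /* cheap: local I/O   */
    return forward_read(d->owner_node, block_no, buf);      /* costly: remote I/O */
}

int main(void)
{
    char buf[BLOCK_SIZE];
    struct vdisk local_disk  = { .owner_node = 0, .local_fd = 3 };
    struct vdisk remote_disk = { .owner_node = 1, .local_fd = -1 };

    vdisk_read(&local_disk, 42, buf);    /* local path  */
    vdisk_read(&remote_disk, 42, buf);   /* remote path */
    return 0;
}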

MPP Systems and Resource Affinity

Non-uniform disk access configurations are commonly used on systems known as shared nothing or Massively Parallel Processing (MPP) systems. If a node in a high availability system fails, then you can usually reconfigure its local disks to be local to another node. For such non-uniform disk access systems, Real Application Clusters requires that the virtual disk layer be provided at the system level. In some cases it is much more efficient to move work to the node where the disk or other I/O device is locally attached than to issue remote requests. This ability to collocate processing with storage is known as resource affinity. Oracle uses it in a variety of areas, including parallel execution and backup.
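
The following C sketch illustrates resource affinity in the abstract; the names and work items are hypothetical. Each piece of work is dispatched to the node where its underlying disk is locally attached, rather than being run against a remote disk.

/* Conceptual sketch of resource affinity (hypothetical names): dispatch a
 * piece of work, such as a parallel execution task or a backup task, to
 * the node where the underlying disk is locally attached. */
#include <stdio.h>

struct work_item {
    const char *description;
    int disk_owner_node;     /* node with the local attachment to the data */
};

/* Hypothetical dispatcher: a real system would enqueue the work on the
 * chosen node rather than print a message. */
static void dispatch_to_node(int node, const struct work_item *w)
{
    printf("running \"%s\" on node %d (data is local there)\n",
           w->description, node);
}

int main(void)
{
    struct work_item items[] = {
        { "scan partition SALES_Q1",        0 },
        { "back up datafile users01.dbf",   1 },
    };

    for (unsigned i = 0; i < sizeof(items) / sizeof(items[0]); i++)
        dispatch_to_node(items[i].disk_owner_node, &items[i]);   /* collocate work with storage */

    return 0;
}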

Figure 3-3 illustrates a shared nothing system:

Figure 3-3 Non-Uniform Disk Access



Advantages of Non-Uniform Disk Access

The advantages of using cluster database processing on MPP or non-uniform disk access systems are:

Clusters: Nodes and the Interconnect

As described previously, to operate Real Application Clusters you must use a uniprocessor, SMP (UMA), or NUMA memory configuration. When configured with an interconnect, two or more of these systems make up a cluster. The performance of a clustered system can be limited by a number of factors, including memory bandwidth, CPU-to-CPU communication bandwidth, the memory available on each node, the I/O bandwidth, and the interconnect bandwidth.

Interoperability with Other Systems

Real Application Clusters is supported on a wide range of clustered systems from a number of different vendors. The number of nodes in a cluster that Real Application Clusters can support is significantly greater than the number in any known implementation. For a small system configured primarily for high availability, there might be only two nodes in the cluster. A large configuration, however, might have 40 to 50 nodes. In general, the cost of managing a cluster is related to the number of nodes in the system. The trend has been toward a smaller number of nodes, with each node configured as a large SMP system that uses shared disks.


Copyright © 1996-2001, Oracle Corporation. All Rights Reserved.