|Oracle9i Real Application Clusters Concepts
Release 2 (9.2)
Part Number A96597-01
This chapter describes the system components and architectural models that typify most cluster database environments. It describes the hardware for nodes along with the hardware and software that unites the nodes into a cluster database. Topics in this chapter include:
A cluster database comprises two or more nodes that are linked by an interconnect. The interconnect serves as the communication path between each node in the cluster database. Each Oracle instance uses the interconnect for the messaging that synchronizes each instance's use of shared resources. Oracle also uses the interconnect to transmit data blocks that the multiple instances share. The primary type of shared resource is the datafiles that all the nodes access. Figure 2-1 is a high-level view of how the interconnect links the nodes in a cluster database and how the cluster accesses the shared datafiles that are on storage devices.
The cluster and its interconnect are linked to the storage devices, or shared disk subsystem, by a storage area network. The following sections describe the nodes and the interconnect in more detail:
A node has the following main components:
You can purchase these components in many configurations. Their arrangement determines how the nodes access memory and storage.
Oracle Corporation recommends that you deploy Real Application Clusters with configurations that have been certified for use with Real Application Clusters databases.
Real Application Clusters uses a high-speed interprocess communication (IPC) component for internode communications. The IPC defines the protocols and interfaces required for Real Application Clusters environments to transfer messages between instances. Messages are the fundamental units of communication in this interface. The core IPC functionality is built on an asynchronous, queued messaging model.
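The asynchronous, queued messaging model described above can be illustrated with a minimal sketch. This is not Oracle's IPC interface; the names (Instance, send, drain) are invented for illustration. The point is the shape of the model: senders enqueue messages and return immediately, and the receiver processes messages from its queue in arrival order.

```python
import queue
import threading

class Instance:
    """One cluster instance with an inbound message queue (illustrative only)."""
    def __init__(self, name):
        self.name = name
        # Queued delivery: senders enqueue and never block on the receiver.
        self.inbox = queue.Queue()

    def send(self, peer, payload):
        # Asynchronous send: place the message on the peer's queue and return.
        peer.inbox.put((self.name, payload))

    def drain(self):
        # The receiver consumes queued messages in the order they arrived.
        received = []
        while not self.inbox.empty():
            received.append(self.inbox.get())
        return received

inst1 = Instance("inst1")
inst2 = Instance("inst2")

# Several concurrent senders enqueue messages without waiting on inst2.
threads = [threading.Thread(target=inst1.send, args=(inst2, i)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

messages = inst2.drain()
```

All three messages arrive on inst2's queue even though no sender waited for inst2 to process them, which is the essential property of an asynchronous, queued model.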
All cluster databases use CPUs in generally the same way. However, you can deploy different configurations of memory, storage, and the interconnect for different purposes. The architecture on which you deploy Real Application Clusters depends on your processing goals.
Each node in a cluster database has one or more CPUs. Nodes with multiple CPUs are typically configured to share main memory. This enables you to deploy a scalable system.
The high-speed interprocess communication (IPC) interconnect is a high-bandwidth, low-latency communication facility that links the nodes in the cluster. The interconnect routes messages and other cluster communications traffic to coordinate each node's access to resources.
You can use Ethernet, a Fiber Distributed Data Interface (FDDI), or other proprietary hardware for your interconnect. Also consider installing a backup interconnect in case your primary interconnect fails. The backup interconnect enhances high availability and reduces the likelihood of the interconnect becoming a single point of failure.
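The role of a backup interconnect can be sketched as a simple failover rule: prefer the primary path, and fall back to the backup only when the primary fails. This is a hypothetical illustration, not Oracle's implementation; send_over() and the link dictionaries are stand-ins.

```python
def send_over(link, message):
    # Stand-in for transmitting a message over one physical interconnect.
    if not link["up"]:
        raise ConnectionError(link["name"] + " is down")
    return "sent via " + link["name"]

def send(message, primary, backup):
    # Prefer the primary interconnect; use the backup only on failure,
    # so the interconnect is not a single point of failure.
    try:
        return send_over(primary, message)
    except ConnectionError:
        return send_over(backup, message)

# Simulate a failed primary link: traffic moves to the backup.
primary = {"name": "eth1", "up": False}
backup = {"name": "eth2", "up": True}
result = send("sync message", primary, backup)
```

With the primary marked down, the message is delivered over the backup link, which is exactly the availability benefit the text describes.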
Real Application Clusters supports user-mode and memory-mapped IPCs. These types of IPCs substantially reduce CPU consumption and IPC latency.
Real Application Clusters requires that all nodes have simultaneous access to the shared disks to give the instances concurrent access to the database. The implementation of the shared disk subsystem is based on your operating system: you can use either a cluster file system or place the files on raw devices. Cluster file systems greatly simplify the installation and administration of Real Application Clusters.
Memory access configurations for Real Application Clusters are typically uniform. This means that the overhead for each node in the cluster to access memory is the same. Storage access configurations, however, can be either uniform or non-uniform. The storage access configuration that you use is independent of your memory configuration.
As with memory configurations, most systems use uniform disk access for Real Application Clusters databases. Uniform disk access configurations in a cluster database simplify disk access administration.