1 Introduction to Oracle RAC

Provides an overview of Oracle Real Application Clusters (Oracle RAC) installation and administration, and various components and functions.

Note:

A multitenant Container Database (CDB) is the only supported architecture in Oracle Database 21c and later releases.

Overview of Oracle RAC

Learn about Oracle Real Application Clusters (Oracle RAC), and the differences in functionality between Oracle RAC and a single-instance Oracle Database.

Non-cluster Oracle Database instances have a one-to-one relationship between Oracle Database and the instance. Oracle RAC environments, however, have a one-to-many relationship between the database and instances. An Oracle RAC database can have several instances, all of which access one Oracle Database. All database instances must use the same interconnect, which can also be used by Oracle Clusterware.

Oracle RAC databases differ architecturally from a non-cluster Oracle Database, in that each Oracle RAC database instance also has:

  • At least one additional thread of redo for each instance
  • An instance-specific undo tablespace

The combined processing power of the multiple servers can provide greater throughput and Oracle RAC scalability than is available from a single server.

A cluster comprises multiple interconnected computers or servers that appear as if they are one server to end users and applications. The Oracle RAC option with Oracle Database enables you to cluster Oracle Database instances. Oracle RAC uses Oracle Clusterware for the infrastructure to bind multiple servers so they operate as a single system.

Oracle Clusterware is a portable cluster management solution that is integrated with Oracle Database. Oracle Clusterware is a required component for Oracle RAC, providing the infrastructure necessary to run it. Oracle Clusterware also manages resources, such as Virtual Internet Protocol (VIP) addresses, databases, listeners, services, and so on. In addition, Oracle Clusterware enables both non-cluster Oracle databases and Oracle RAC databases to use the Oracle high-availability infrastructure. Oracle Clusterware, together with Oracle Automatic Storage Management (Oracle ASM), comprises the Oracle Grid Infrastructure, which enables you to create a clustered pool of storage to be used by any combination of non-cluster and Oracle RAC databases.

Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates. Starting with Oracle Database 19c, the integration of vendor clusterware with Oracle Clusterware is deprecated, and is not supported in Oracle Database 21c and later releases.

The following figure shows how Oracle RAC is the Oracle Database option that provides a single system image for multiple servers to access one Oracle database. In Oracle RAC, each Oracle instance must run on a separate server.

Figure 1-1 Oracle Database with Oracle RAC Architecture


Traditionally, an Oracle RAC environment is located in one data center. However, you can configure Oracle RAC on an Oracle Extended Cluster, which is an architecture that provides extremely fast recovery from a site failure and allows for all nodes, at all sites, to actively process transactions as part of a single database cluster. In an extended cluster, the nodes in the cluster are typically dispersed, geographically, such as between two fire cells, between two rooms or buildings, or between two different data centers or cities. For availability reasons, the data must be located at both sites, thus requiring the implementation of disk mirroring technology for storage.

If you choose to implement this architecture, then you must assess whether this architecture is a good solution for your business, especially considering distance, latency, and the degree of protection it provides. Oracle RAC on extended clusters provides higher availability than is possible with local Oracle RAC configurations. However, an extended cluster may not fulfill all of the disaster-recovery requirements of your organization. A feasible separation provides great protection for some disasters (for example, local power outage or server room flooding), but it cannot provide protection against all types of outages. For comprehensive protection against disasters—including protection against corruptions and regional disasters—Oracle recommends the use of Oracle Data Guard with Oracle RAC, as described in the Oracle Data Guard Concepts and Administration, and on the Maximum Availability Architecture (MAA) Web site.

Oracle RAC is a unique technology that provides high availability and scalability for all application types. The Oracle RAC infrastructure is also a key component for implementing the Oracle enterprise grid computing architecture. Having multiple instances access a single database prevents the server from being a single point of failure. Oracle RAC enables you to combine smaller commodity servers into a cluster to create scalable environments that support mission critical business applications. Applications that you deploy on Oracle RAC databases can operate without code changes.

Oracle RAC allows multiple instances running on different nodes to access the same database. Oracle RAC Cache Fusion automatically ensures that changes from multiple sessions running on different instances are coordinated. In Oracle Database 21c and later releases, the background processes that perform Oracle RAC Cache Fusion functionality have been enhanced to handle irrecoverable errors. Depending on the cause, Oracle RAC can retry the operation or correct the error to prevent instance failure. This reduces the occurrence of instance failures and helps to prevent the associated workload impact.

Overview of Oracle Multitenant with Oracle RAC

You can configure a multitenant Container Database (CDB) to use Oracle RAC.

You can make each PDB available on either every database instance of the Oracle RAC CDB or on a subset of instances. In either case, access to PDBs is regulated using dynamic database services. Applications use these services to connect to a PDB, just as they would connect to a single-instance non-CDB.
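
For example, a client might connect to a PDB through its dynamic database service using an Easy Connect string; the SCAN address and service name shown here are hypothetical:

   sqlplus hr_user@//myscan.example.com:1521/hr_pdb_svc.example.com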

You can isolate PDBs to prevent certain operations from being performed on or within a particular PDB that may interfere with other PDBs sharing the same Oracle RAC database or instance. PDB isolation allows for greater consolidation.

If you create an Oracle RAC database as a CDB, and if you plug PDBs into the CDB, then by default a PDB is not started automatically on any instance. When you assign the first dynamic database service to the PDB (other than the default database service, which has the same name as the database), the PDB is made available on those instances on which the service runs.
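
As a sketch, assuming a CDB named mycdb and a PDB named hrpdb (both names hypothetical), you could assign a dynamic database service to the PDB using SRVCTL:

   srvctl add service -db mycdb -service hr_pdb_svc -pdb hrpdb
   srvctl start service -db mycdb -service hr_pdb_svc

Starting the service makes the PDB available on the instances on which the service runs.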

Regardless of whether a PDB is available on multiple instances of an Oracle RAC CDB, access to the PDB is typically managed using the services running on the PDB. You can manually enable PDB access on each instance by starting the PDB manually on that instance.

Overview of Installing Oracle RAC

Install Oracle Grid Infrastructure and Oracle Database software using Oracle Universal Installer, and create your database with Oracle Database Configuration Assistant (Oracle DBCA).

This ensures that your Oracle RAC environment has the optimal network configuration, database structure, and parameter settings for the environment that you selected.

Alternatively, you can install Oracle RAC using Fleet Patching and Provisioning, which offers all of the advantages of Oracle Universal Installer and Oracle DBCA previously specified. In addition, Fleet Patching and Provisioning allows for standardization and automation.

Understanding Compatibility in Oracle RAC Environments

As part of your deployment plan, review the release compatibility restrictions and guidelines for using different Oracle Database releases on Oracle Grid Infrastructure.

To run Oracle RAC in configurations with different releases of Oracle Database in the same cluster, you must first install Oracle Grid Infrastructure, which must be the same version, or higher, as the highest version of Oracle Database that you want to deploy in this cluster. For example, to run an Oracle RAC 19c database and an Oracle RAC 23ai database in the same cluster, you must install Oracle Grid Infrastructure 23ai. Contact My Oracle Support for more information about version compatibility in Oracle RAC environments.

Oracle RAC Database Installation

Learn how to install Oracle Real Application Clusters (Oracle RAC) and review the restrictions.

Note:

The policy-managed database deployment option is desupported in Oracle Database 23ai.

Before you install Oracle RAC, first install Oracle Grid Infrastructure. The release of Oracle Grid Infrastructure must be the same as or newer than the Oracle RAC release that you are installing. Oracle Universal Installer only enables you to deploy an Oracle Database home across the nodes in a cluster if you previously installed and configured Oracle Grid Infrastructure for the cluster. If Oracle Universal Installer does not give you an option to deploy the database home across all of the nodes in the cluster, then the server on which you are attempting to install Oracle RAC is not a cluster member node.

To create an Oracle RAC or an Oracle RAC One Node database, you must manually run Oracle DBCA after installing the software. Database creation is therefore a two-step process: first, install the Oracle RAC software by running the Oracle Database installer; then create and configure your Oracle RAC or Oracle RAC One Node database using Oracle DBCA.

Note:

Before you create a database, a default listener must be running in the Oracle Grid Infrastructure home. If there is no default listener in the Oracle Grid Infrastructure home, then Oracle DBCA returns an error instructing you to run NETCA from the Oracle Grid Infrastructure home to create a default listener.
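
For example, you can verify that the default listener is running before starting Oracle DBCA; the following commands use the standard Oracle tooling, with the default listener name assumed:

   srvctl status listener
   lsnrctl status LISTENER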

The Oracle RAC software is distributed as part of the Oracle Database installation media. By default, the Oracle Database software installation process installs the Oracle RAC option when the installation process recognizes that you are performing the installation on a cluster. Oracle Universal Installer installs Oracle RAC into a directory structure referred to as the Oracle home, which is separate from the Oracle home directory for other Oracle software running on the system. Because Oracle Universal Installer is cluster aware, it installs the Oracle RAC software on all of the nodes that you defined to be part of the cluster.

Starting with Oracle Database 21c, the installation process creates a read-only Oracle home directory by default. You can use the read-only Oracle home as a software image to be shared across multiple database servers. This simplifies patching and mass rollout because only one Oracle home image needs to be updated to distribute a patch to multiple database servers. Read-only Oracle Database homes are more secure than traditional Oracle Database homes because there is a clear separation of configuration information from the actual software. Thus, there is no risk of new files being created inside the Oracle Database home from active processes.

Oracle RAC Database Creation

Part of Oracle Database deployment is the creation of the database.

You can create a database as part of the database deployment, or you can deploy only the database software first and then use Oracle DBCA to create any database that is meant to run out of the newly created Oracle home.

Note:

The policy-managed database deployment option is desupported in Oracle Database 23ai.

In Oracle RAC environments, the value of DB_UNIQUE_NAME.DB_DOMAIN, taken in its entirety, must be unique for each database within your enterprise. The name of each Pluggable Database (PDB) should also be unique within the cluster.
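
For example, you can check these values on an existing database with a query such as the following (a sketch against the standard initialization parameters):

   SELECT name, value
   FROM   v$parameter
   WHERE  name IN ('db_unique_name', 'db_domain');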

By default, Oracle DBCA creates one service for your Oracle RAC installation. This is the default database service and you should not use this service for user connectivity. The default database service is typically identified using the combination of the DB_NAME and DB_DOMAIN initialization parameters: db_name.db_domain. The default service is available on all instances in Oracle RAC environments, unless the database is in restricted mode.

Note:

Oracle recommends that you reserve the default database service for maintenance operations and create dynamic database services for user or application connectivity as a post-database creation step, using either SRVCTL or Oracle Enterprise Manager. Oracle DBCA no longer offers a dynamic database service creation option for Oracle RAC databases. For Oracle RAC One Node databases, you must create at least one dynamic database service.
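
As a sketch, the following SRVCTL commands create and start a dynamic database service for user connectivity on an admin-managed database; the database, service, and instance names are hypothetical:

   srvctl add service -db orcl -service app_svc -preferred orcl1 -available orcl2
   srvctl start service -db orcl -service app_svc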

Overview of Extending Oracle RAC Clusters

To extend an Oracle RAC cluster (a process also known as cloning) and add nodes to your environment after your initial deployment, you must do so on multiple layers, taking into account the management style that you currently use in the cluster.

Oracle provides various means of extending Oracle RAC clusters. Choose from the following approaches to extend the current environment:

  • Fleet Patching and Provisioning to provision new Oracle RAC databases and other software
  • Cloning using cloning scripts
  • Adding nodes using the addnode.sh (addnode.bat on Windows) script

All of these approaches apply, regardless of how you initially deployed your environment. Each approach copies the Oracle software onto the node that you plan to add to the cluster. Software that is copied to the node includes the Oracle Grid Infrastructure software and the Oracle Database homes.

For Oracle Database homes, ensure that the database software is deployed on all of the nodes on which database instances can potentially run. In either case, first deploy Oracle Grid Infrastructure on all of the nodes that are to be part of the cluster.

Note:

Oracle cloning is not a replacement for cloning using Oracle Enterprise Manager as part of the Provisioning Pack. When you clone Oracle RAC using Oracle Enterprise Manager, the provisioning process includes a series of steps where details about the home that you want to capture, the location to which you want to deploy, and various other parameters are collected.

For new installations, or if you install only one Oracle RAC database, use the traditional automated and interactive installation methods, such as Oracle Universal Installer, Fleet Patching and Provisioning, or the Provisioning Pack feature of Oracle Enterprise Manager. To add or delete Oracle RAC from nodes in a cluster, use the procedures detailed in the Adding and Deleting Oracle RAC from Nodes... topics listed at the end of this topic.

The cloning process assumes that you successfully installed an Oracle Clusterware home and an Oracle home with Oracle RAC on at least one node. In addition, all root scripts must have run successfully on the node from which you are extending your cluster database.

See Also:

Oracle Enterprise Manager online help system for more information about the Provisioning Pack

Overview of Oracle Real Application Clusters One Node

Oracle Real Application Clusters One Node (Oracle RAC One Node) is an option to Oracle Database Enterprise Edition available since Oracle Database 11g release 2 (11.2).

Oracle RAC One Node is a single instance of an Oracle RAC-enabled database that, under normal operations, runs on only one node in the cluster. This option adds to the flexibility that Oracle offers for database consolidation while reducing management overhead by providing a standard deployment for Oracle databases in the enterprise. An Oracle RAC One Node database requires Oracle Grid Infrastructure and, therefore, the same hardware setup as an Oracle RAC database.

Oracle supports Oracle RAC One Node on all platforms on which Oracle RAC is certified. Similar to Oracle RAC, Oracle RAC One Node is certified on Oracle Virtual Machine (Oracle VM). Using Oracle RAC or Oracle RAC One Node with Oracle VM increases the benefits of Oracle VM with the high availability and scalability of Oracle RAC.

With Oracle RAC One Node, there is no limit to server scalability and, if applications grow to require more resources than a single node can supply, then you can upgrade your applications online to Oracle RAC. If the node that is running Oracle RAC One Node becomes overloaded, then you can relocate the instance to another node in the cluster. With Oracle RAC One Node you can use the Online Database Relocation feature to relocate the database instance with no downtime for application users. Alternatively, you can limit the CPU consumption of individual database instances per server within the cluster using Resource Manager Instance Caging and dynamically change this limit, if necessary, depending on the demand scenario.
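
For example, an Online Database Relocation of a hypothetical Oracle RAC One Node database named onedb to the node node2, allowing up to 30 minutes for existing transactions to complete, might look like this:

   srvctl relocate database -db onedb -node node2 -timeout 30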

Using the Single Client Access Name (SCAN) to connect to the database, clients can locate the service independently of the node on which it is running. Relocating an Oracle RAC One Node instance is therefore mostly transparent to the client, depending on the client connection. Oracle recommends using either Application Continuity and Oracle Fast Application Notification or Transparent Application Failover to minimize the impact of a relocation on the client.

Oracle RAC One Node databases are administered slightly differently from Oracle RAC or non-cluster databases. For Oracle RAC One Node databases, you must monitor the candidate node list and make sure a server is always available for failover, if possible.

Note:

  • Oracle RAC One Node supports Transaction Guard and Application Continuity for failing over clients.
  • To prepare for all failure possibilities, you must add at least one Dynamic Database Service (Oracle Clusterware-managed database service) to an Oracle RAC One Node database.

Overview of Oracle Clusterware for Oracle RAC

Oracle Clusterware provides a complete, integrated clusterware management solution on all Oracle Database platforms.

Guidelines for Using Oracle Clusterware

Oracle Clusterware provides all of the features that are required to manage cluster databases, including node membership, group services, global resource management, and high availability functions.

You can install Oracle Grid Infrastructure independently or as a prerequisite to installing Oracle RAC. Oracle Database features, such as services, use the underlying Oracle Clusterware mechanisms to provide advanced capabilities. Starting with Oracle Database 21c, third-party clusterware products are no longer supported with Oracle RAC.

Oracle Clusterware is designed for, and tightly integrated with, Oracle RAC. You can use Oracle Clusterware to manage high-availability operations in a cluster. When you create an Oracle RAC database using any of the management tools, the database is registered with and managed by Oracle Clusterware, along with the other required components such as the VIP address, the Single Client Access Name (SCAN) (which includes the SCAN VIPs and the SCAN listener), Oracle Notification Service, and the Oracle Net listeners. These resources automatically start when the node starts and automatically restart if they fail. The Oracle Clusterware daemons run on each node.

Anything that Oracle Clusterware manages is known as a CRS resource. A CRS resource can be a database, an instance, a Pluggable Database (PDB), a service, a listener, a VIP address, or an application process. Oracle Clusterware manages CRS resources based on the resource's configuration information, which is stored in the Oracle Cluster Registry (OCR). You can use SRVCTL commands to administer any Oracle-defined CRS resources. Oracle Clusterware also provides the framework that enables you to create CRS resources to manage any process running on servers in the cluster that is not predefined by Oracle. Oracle Clusterware stores the information that describes the configuration of these components in the OCR, which you can administer.
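
For example, you might inspect the state and configuration of CRS resources with commands such as the following; the database name orcl is hypothetical:

   crsctl stat res -t
   srvctl status database -db orcl
   srvctl config database -db orcl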

Overview of Temporary Tablespaces

Oracle uses temporary tablespaces to write spill-overs to non-shared temporary tablespaces that are created on local disks.

It is still possible for SQL operations, such as hash aggregations, sorts, hash joins, creation of cursor-duration temporary tables for the WITH clause, and star transformations, to spill over to disk. These spill-overs go to the temporary tablespaces on shared disks. Management of temporary tablespaces is similar to that of existing tablespaces.

Temporary tablespaces improve tablespace management by:

  • Storing temporary files in private storage to take advantage of local storage I/O benefits.
  • Avoiding expensive cross-instance temporary tablespace management.
  • Increasing the addressability of temporary tablespaces.
  • Improving instance warm-up performance by eliminating on-disk space metadata management.

Note:

You cannot use temporary tablespaces to store database objects, such as tables or indexes.

Parallel Execution Support for Cursor-Duration Temporary Tablespaces

The temporary tablespaces that are created using the WITH clause and star transformation exist in the temporary tablespace on shared disk.

A set of parallel query child processes loads the intermediate query results into these temporary tablespaces, which are then read by a different set of child processes. There is no restriction on how the child processes that read these results are allocated, because any parallel query child process on any instance can read the temporary tablespaces residing on the shared disk.

For read-write and read-only instance architecture, when the parallel query child processes load the intermediate results to the temporary tablespaces of these instances, the parallel query child processes of the instance where the intermediate results are stored share an affinity with the read operations for the intermediate results and can thus read them.

Temporary Tablespace Organization

Review the following information when organizing the temporary tablespaces.

For example:

CREATE TEMPORARY TABLESPACE temp_ts TEMPFILE
   '/u01/app/oracle/database/23.4.0/dbs/temp_file'
   EXTENT MANAGEMENT UNIFORM SIZE 1M AUTOEXTEND ON;

  • Creation of a temporary tablespace results in the creation of temporary files on every instance and not a single file.

  • You can create temporary tablespaces for both read-only and read-write instances.

    CREATE TEMPORARY TABLESPACE FOR ALL temp_ts TEMPFILE
      '/u01/app/oracle/database/23.4.0/dbs/temp_file'
      EXTENT MANAGEMENT UNIFORM SIZE 1M AUTOEXTEND ON;

Temporary Tablespace Hierarchy

When you define temporary tablespaces, there is a hierarchy that determines how the tablespaces are used.

There can be multiple temporary tablespaces in a database, such as the default shared temporary tablespace for the database and multiple temporary tablespaces that are assigned to individual users. If a user has a shared temporary tablespace assigned, then that tablespace is used first; otherwise, the database default temporary tablespace is used.

Once a tablespace has been selected for spilling during query processing, there is no switching to another tablespace. For example, if a user has a shared temporary tablespace assigned and that tablespace runs out of space during spilling, then there is no switching to an alternative tablespace. The spilling, in that case, results in an error. Additionally, remember that shared temporary tablespaces are shared among instances.

The allocation of temporary space for spilling to a temporary tablespace differs between read-only and read-write instances. For read-only instances, the following is the priority of selecting which temporary location to use for spills:

  1. Allocate from a user's temporary tablespace.
  2. Allocate from the database default temporary tablespace.

Note:

If there is no user or database default temporary tablespace, then read-only instances spill to the shared temporary tablespace.

For read-write instances, the priority of allocation differs from the preceding order because shared temporary tablespaces are given priority. The allocation order is as follows:

  1. A user’s shared temporary tablespace.
  2. The database default shared temporary tablespace.

Temporary Tablespace Features

Review the following information when using temporary tablespaces.

Instances cannot share temporary tablespaces; therefore, one instance cannot take temporary space from another. If an instance runs out of temporary space during spilling, then the statement results in an error.

  • Temporary tablespaces support only one BIGFILE per tablespace.

  • To address contention issues arising from having only one BIGFILE-based temporary tablespace, you can assign multiple temporary tablespaces to different users as the default.

  • A database administrator can specify the default temporary tablespace for a user using ALTER USER syntax. For example:

    ALTER USER MAYNARD TEMPORARY TABLESPACE temp_ts;
  • You can configure a user with two default temporary tablespaces:

    • One temporary tablespace when the user is connected to the read-only instance.

    • One shared temporary tablespace to be used when the same user is connected on the read-write instances that are running on a node.

Metadata Management of Temporary Files

Instance-specific information, such as bitmap for allocation, current size for a temporary file, and the file status, is stored in the SGA and not in control files because such information can vary across instances.

Currently, temporary file information (such as the file name, creation size, creation SCN, temporary block size, and file status) is stored in the control file, along with the initial and maximum file sizes and the autoextend attributes. However, information about temporary files in the control file is common to all applicable instances.

When an instance starts, it reads the control file information and creates the temporary files that constitute the temporary tablespace for that instance. If there are two or more instances running on a node, then each instance has its own temporary files.

For temporary tablespaces, there is a separate file for each involved instance. The temporary file names follow a naming convention such that the instance numbers are appended to the temporary file names that are specified while creating the temporary tablespace.

For example, assume that a read-only node, N1, runs two Oracle read-only database instances with numbers 3 and 4. The following DDL command creates two files on node N1, /temp/temp_file_3 and /temp/temp_file_4, for instances 3 and 4, respectively:

CREATE TEMPORARY TABLESPACE temp_ts TEMPFILE '/temp/temp_file'
   EXTENT MANAGEMENT UNIFORM SIZE 1M AUTOEXTEND ON;

Assuming that there are two read-write instances, instances 1 and 2, and two read-only instances, instances 3 and 4, the following DDL command creates four files—/temp/temp_file_all_1 and /temp/temp_file_all_2 for instances 1 and 2, respectively, and /temp/temp_file_all_3 and /temp/temp_file_all_4 for instances 3 and 4, respectively:

CREATE TEMPORARY TABLESPACE FOR ALL temp_ts_for_all TEMPFILE '/temp/temp_file_all'
   EXTENT MANAGEMENT UNIFORM SIZE 1M AUTOEXTEND ON;

DDL Support for Temporary Tablespaces

You can manage temporary tablespaces and temporary files with the DDL commands ALTER TABLESPACE and ALTER DATABASE.

Run all temporary tablespace management and creation DDL commands from read-write instances. All other DDL commands affect all instances in a homogeneous manner.

For example, the following command resizes the temporary files on all read-only instances:

ALTER TABLESPACE temp_ts RESIZE 1G;

For temporary tablespaces, Oracle supports the allocation options and their restrictions that are currently active for temporary files.

To run a DDL command on a temporary tablespace on a read-only instance, there must be at least one read-only instance in the cluster. This restriction is not applicable when creating or altering temporary tablespaces FOR ALL. You can assign a default temporary tablespace to the database with the clause DEFAULT TEMPORARY TABLESPACE appended to the command ALTER DATABASE.

For example:

ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp_ts;

You can specify a default shared temporary tablespace when creating the database, as follows:

CREATE DATABASE .. DEFAULT TEMPORARY TABLESPACE temp_ts_for_db TEMPFILE
   '/temp/temp_file_for_db' EXTENT MANAGEMENT UNIFORM SIZE 1M AUTOEXTEND ON;

However, you cannot specify a default FOR ALL temporary tablespace using the command CREATE DATABASE. When you create a database, its default temporary tablespace points to the default shared temporary tablespace. You must run the command ALTER DATABASE to assign an existing FOR ALL temporary tablespace as the default for the database.

Temporary Tablespace for Users

When you create a user without explicitly specifying a shared or temporary tablespace, the user inherits the shared and temporary tablespaces from the corresponding database defaults.

You can specify default temporary tablespace for a user, as follows:

CREATE USER new_user IDENTIFIED BY new_user TEMPORARY TABLESPACE temp_ts_for_all;

You can change the temporary tablespace for a user using the ALTER USER command, as follows:

ALTER USER maynard TEMPORARY TABLESPACE temp_ts;

As previously mentioned, a user's default temporary tablespace can be a shared temporary tablespace. Consider the following items for the ALTER USER...TEMPORARY TABLESPACE command:

  • You can change the user default temporary tablespace to any existing temporary tablespace.

  • If you want to set the user default temporary tablespace to a shared temporary tablespace, T, then T must be the same as the default shared temporary tablespace.

  • If a default user temporary tablespace points to a shared temporary tablespace, then, when you change the default shared temporary tablespace of the user, you also change the default temporary tablespace to that tablespace.

Following are some examples of temporary space management using the command ALTER:

  • To take a temporary tablespace offline:

    ALTER DATABASE TEMPFILE '/temp/temp_file' OFFLINE;

  • To decrease the size of a temporary tablespace:

    ALTER TABLESPACE temp_ts SHRINK SPACE KEEP 20M;

  • To change the auto-extension attributes of a temporary file:

    ALTER TABLESPACE temp_ts AUTOEXTEND ON NEXT 20G;

  • To resize a temporary file:

    ALTER TABLESPACE temp_ts RESIZE 10G;

    Note:

    When you resize a temporary file, it applies to individual files.

Some read-only instances may be down when you run the preceding commands. This does not prevent the commands from succeeding because when a read-only instance starts up later, it creates the temporary files based on information in the control file. Creation is fast because Oracle Database reformats only the header block of the temporary file, recording information about the file size, among other things. If you cannot create the temporary files, then the read-only instance stays down. Commands that you submitted from read-write instances are replayed immediately on all open, read-only instances.

Atomicity Requirement for Commands

All of the commands that you run from read-write instances are performed in an atomic manner.

This means that the command succeeds only when it succeeds on all live instances.

Temporary Tablespace and Dictionary Views

Oracle extended dictionary views can display information about temporary tablespaces.

Note the following changes to data dictionary views:

  • All of the diagnosability information related to temporary tablespaces and temporary files, as exposed through AWR, SQL monitor, and other utilities, remains available through the existing dictionary views for temporary tablespaces and temporary files: DBA_TEMP_FILES and DBA_TEMP_FREE_SPACE.

  • The USER_TABLESPACES and DBA_TABLESPACES dictionary views are extended by a column called SHARED, which indicates whether the tablespace is shared.

  • The DBA_TEMP_FILES dictionary view is extended by two columns: SHARED and INST_ID. The SHARED column indicates whether the temporary file is shared. The INST_ID column contains the instance number. For shared temporary files, there is a single row per file and INST_ID is NULL. For local temporary files, there is a row per file for each instance, containing instance-specific information such as the size of the file in bytes (BYTES column).

  • The DBA_TEMP_FREE_SPACE dictionary view is extended by two columns: SHARED and INST_ID. The SHARED column indicates whether the temporary file is shared. The INST_ID column contains the instance number. For shared temporary files, there is a single row for each file and INST_ID is NULL. For local temporary files, there is a row per file for each instance, containing instance-specific information such as the total free space available (FREE_SPACE column).

  • In the dictionary views, such as DBA_TABLESPACES, Oracle distinguishes the type of the tablespace using the SHARED column with the following values:

    • SHARED: for shared temporary tablespace

    • LOCAL_ON_ALL: for local temporary tablespaces that exist on all of the instances

    Note:

    Currently, spills onto the temporary tablespace for queries, such as sort and hash join spills, are automatically encrypted. This is also true for spills to local temporary tablespaces.
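As a sketch of how the extended dictionary views described above can be queried (the SHARED and INST_ID columns are those documented in this section; the output depends on your configuration):

```sql
-- List temporary files and whether each is shared; INST_ID is NULL
-- for shared temporary files and set per instance for local ones.
SELECT file_name, tablespace_name, shared, inst_id, bytes
FROM   dba_temp_files
ORDER  BY tablespace_name, inst_id;

-- Distinguish temporary tablespace types using the SHARED column.
SELECT tablespace_name, contents, shared
FROM   dba_tablespaces
WHERE  contents = 'TEMPORARY';
```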

Overview of Oracle RAC Architecture and Processing

Installing Oracle RAC requires software, a network, and a storage configuration.

Oracle RAC requires several components:
  • Oracle Clusterware for concurrent access to the same storage and the same set of data files from all of the nodes in a cluster
  • A communications protocol for enabling interprocess communication (IPC) across all of the nodes in a cluster
  • Multiple database instances that process data as if the data resided on a logically combined, single cache
  • A mechanism for monitoring and communicating the statuses of the nodes in a cluster

Understanding Cluster-Aware Storage Solutions

Learn about the Oracle Real Application Clusters shared everything architecture and what shared everything means for your storage options.

An Oracle RAC database is a shared everything database. All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks. This enables all of the cluster database instances to access these storage components. Because Oracle RAC databases use a shared everything architecture, Oracle RAC requires cluster-aware storage for all of the database files.

In Oracle RAC, Oracle Database manages the disk access and is certified for use on a variety of storage architectures. How you configure your storage is your choice, but you must use a supported cluster-aware storage solution. Oracle Database provides the following Oracle RAC storage options:

  • Oracle Automatic Storage Management (Oracle ASM)

    Oracle recommends this solution to manage your storage.

  • A certified cluster file system

    • Oracle recommends Oracle Advanced Cluster File System (Oracle ACFS).

    • A third-party cluster file system on a cluster-aware volume manager that is certified for Oracle RAC. For example:

      • Oracle OCFS2 (Linux only)

      • IBM GPFS (IBM AIX only)

  • Certified network file system (NFS) solution

Oracle RAC and Network Connectivity

All nodes in Oracle RAC environments must connect to at least one Local Area Network (LAN). This network is commonly referred to as the public network, and it enables users and applications to access the database.

In addition to the public network, Oracle RAC requires private network connectivity that is used exclusively for communication among the nodes and database instances that are running on those nodes. This network is commonly referred to as the interconnect.

The interconnect network is a private network that connects all of the servers in a cluster. The interconnect network must use at least one switch and a Gigabit Ethernet adapter.

Note:

  • Oracle supports interfaces with higher bandwidth but does not support using crossover cables with the interconnect.

  • Do not use the interconnect, the private network, for user communication, because Cache Fusion uses the interconnect for interinstance communication.

You can configure Oracle RAC to use either the User Datagram Protocol (UDP) or Reliable Data Socket (RDS) protocols for inter-instance communication on the interconnect. Oracle Clusterware uses the same interconnect using the UDP protocol, but cannot be configured to use RDS.
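To verify which interconnect interfaces each instance has registered, you can query the standard V$CLUSTER_INTERCONNECTS dynamic performance view (shown here as a sketch; the rows returned depend on your cluster configuration):

```sql
-- Show the interconnect interfaces registered by each instance.
SELECT inst_id, name, ip_address, is_public
FROM   gv$cluster_interconnects
ORDER  BY inst_id;
```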

An additional network connectivity is required when using Network Attached Storage (NAS). Network attached storage can be typical NAS devices, such as NFS filers, or it can be storage that is connected using Fibre Channel over IP, for example. This additional network communication channel should be independent of the other communication channels that Oracle RAC uses (the public and private network communication). If the storage network communication must be converged with one of the other network communication channels, then ensure that storage-related communication gets first priority.

Overview of Using Dynamic Database Services to Connect to Oracle Databases

Applications should use the Dynamic Database Services feature to connect to an Oracle database over the public network.

Dynamic Database Services enable you to define rules and characteristics to control how users and applications connect to database instances. These characteristics include a unique name, workload balancing and failover options, and high availability characteristics.

Users can access an Oracle RAC database using a client/server configuration or through one or more middle tiers, with or without connection pooling. By default, a user connection to an Oracle RAC database is established using the TCP/IP protocol but Oracle supports other protocols. Oracle RAC database instances must be accessed through the SCAN for the cluster.

Overview of Virtual IP Addresses

Node VIPs are virtual IP (VIP) addresses that clients use to connect to Oracle RAC databases.

Oracle Clusterware hosts the node VIP addresses on a public network. The process for a typical connection attempt from a database client to an Oracle RAC database instance can be summarized as follows:

  1. The database client connects to SCAN (which includes a SCAN VIP on a public network), providing the SCAN listener with a valid service name.

  2. The SCAN listener then determines which database instance hosts this service and routes the client to the local or node listener on the respective node.

  3. The node listener, listening on a node VIP and a given port, retrieves the connection request and connects the client to an instance on the local node.

If multiple public networks are used on the cluster to support client connectivity through multiple subnets, then the preceding operation is performed within a given subnet.

If a node fails, then the VIP address fails over to another node on which the VIP address can accept TCP connections, but it does not accept connections to the Oracle database. Clients that attempt to connect to a VIP address that does not reside on its home node receive a rapid connection refused error instead of waiting for TCP connect timeout messages. When the network on which the VIP is configured comes back online, Oracle Clusterware fails back the VIP to its home node, where connections are accepted. Generally, VIP addresses fail over when:

  • The node on which a VIP address runs fails
  • All interfaces for the VIP address fail
  • All interfaces for the VIP address are disconnected from the network

Oracle RAC supports multiple public networks to enable access to the cluster through different subnets. Each network resource represents its own subnet and each database service uses a particular network to access the Oracle RAC database. Each network resource is a resource that is managed by Oracle Clusterware, which enables the VIP behavior previously described.

SCAN is a single network name that is defined either in your organization's Domain Name Server (DNS) or in the Grid Naming Service (GNS) that operates in a round robin order among three IP addresses. Oracle recommends that all of the connections to the Oracle RAC database use the SCAN in their client connection string. Incoming connections are load balanced across the active instances providing the requested service through the three SCAN listeners. With SCAN, you do not have to change the client connection even if the configuration of the cluster changes (nodes added or removed). SCAN fully supports multiple subnets. This means that you can create one SCAN for each subnet in which you want your cluster to operate.

Restricted Service Registration in Oracle RAC

The valid node checking feature provides the ability to configure and dynamically update a set of IP addresses or subnets from which registration requests are allowed by the listener.

Database instance registration with a listener succeeds only when the request originates from a valid node. The network administrator can specify a list of valid nodes, excluded nodes, or disable valid node checking altogether. The list of valid nodes explicitly lists the nodes and subnets that can register with the database. The list of excluded nodes explicitly lists the nodes that cannot register with the database. The control of dynamic registration results in increased manageability and security of Oracle RAC deployments.

By default, the SCAN listener agent sets REMOTE_ADDRESS_REGISTRATION_listener_name to a private IP endpoint. The SCAN listener accepts registration requests only from the private network. Remote nodes that are not accessible to the private network of the SCAN listener must be included in the list of valid nodes by using the registration_invited_nodes_alias parameter in the listener.ora file, or by modifying the SCAN listener using the command-line interface, SRVCTL.

Note:

Starting with Oracle Grid Infrastructure 12c, for a SCAN listener, if the VALID_NODE_CHECKING_REGISTRATION_listener_name and REGISTRATION_INVITED_NODES_listener_name parameters are set in the listener.ora file, then the listener agent overwrites these parameters.

If you use the SRVCTL utility to set the invitednodes and invitedsubnets values, then the listener agent automatically sets VALID_NODE_CHECKING_REGISTRATION_listener_name to SUBNET and sets REGISTRATION_INVITED_NODES_listener_name to the specified list in the listener.ora file.

For other listeners managed by CRS, the listener agent sets VALID_NODE_CHECKING_REGISTRATION_listener_name to SUBNET in the listener.ora file only if it is not already set. The SRVCTL utility does not support setting the invitednodes and invitedsubnets values for a non-SCAN listener. The listener agent does not update REGISTRATION_INVITED_NODES_listener_name in the listener.ora file for a non-SCAN listener.
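As an illustration only (the listener name LISTENER_SCAN1, the node names, and the IP address are hypothetical; inspect the generated listener.ora before editing), the parameters discussed above take this general form in listener.ora:

```
VALID_NODE_CHECKING_REGISTRATION_LISTENER_SCAN1 = SUBNET
REGISTRATION_INVITED_NODES_LISTENER_SCAN1 = (node1, node2, 192.0.2.15)
```

Equivalently, you can let the listener agent maintain these parameters by modifying the SCAN listener with SRVCTL, for example: srvctl modify scan_listener -invitednodes "node1,node2" -update.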

About Oracle RAC Software Components

Oracle RAC databases generally have two or more database instances that each contain memory structures and background processes.

An Oracle RAC database has the same processes and memory structures as single-instance Oracle databases. Oracle RAC also has additional processes and memory structures that are specific to Oracle RAC. Any one instance's database view is nearly identical to any other instance's view in the same Oracle RAC environment; the view is a single-system image of the environment.

Each instance has a buffer cache in its System Global Area (SGA). Using Cache Fusion, Oracle RAC environments logically combine each instance's buffer cache to enable the instances to process data as if the data resided on a logically combined, single cache.

Note:

  • The In-Memory Transaction Manager integrates with the Cache Fusion protocol.

  • The SGA size requirements for Oracle RAC are greater than the SGA requirements for noncluster Oracle databases due to Cache Fusion.

To ensure that each Oracle RAC database instance obtains the block that it requires to satisfy a query or transaction, Oracle RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances, which effectively increases the size of the SGA for an Oracle RAC instance.

After one instance caches data, any other instance within the same cluster database can acquire a block image from another instance in the same database faster than by reading the block from disk. Therefore, Cache Fusion moves current blocks between instances rather than re-reading the blocks from disk. When a consistent block is needed or a changed block is required on another instance, Cache Fusion transfers the block image directly between the affected instances. Oracle RAC uses the private interconnect for interinstance communication and block transfers. The GES Monitor and the Instance Enqueue Process manage access to Cache Fusion resources and enqueue recovery processing.

Cache Fusion monitors the latency on the private networks and the service time on the disks, and automatically chooses the best path; for example, when the shared disks include low-latency SSDs, the disk path may be the one chosen.
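Cache Fusion activity can be observed through the standard global dynamic performance views. For example, the following sketch reports the global cache blocks each instance has received from other instances (exact statistic names can vary by release):

```sql
-- Blocks received from other instances through Cache Fusion, per instance.
SELECT inst_id, name, value
FROM   gv$sysstat
WHERE  name IN ('gc cr blocks received', 'gc current blocks received')
ORDER  BY inst_id, name;
```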

About Oracle RAC Background Processes

The global cache service (GCS) and global enqueue service (GES) processes, along with the global resource directory (GRD), collaborate to enable Cache Fusion.

The Oracle RAC processes and their identifiers are as follows:

  • ACMS: Atomic Controlfile to Memory Service (ACMS)

    In Oracle RAC environments, the ACMS process on each instance is an agent that contributes to ensuring a distributed SGA memory update is either globally committed on success or globally terminated if a failure occurs.

  • BG00: Background Process

    The ora_bg process is a standard background process in Oracle RAC environments that performs various maintenance and monitoring tasks. It is responsible for buffer cache management, log writer (LGWR) process support, checkpoint coordination, background process monitoring, and statistics gathering.

  • GCW: Global Conflict Resolution (GCR) Monitor Process (LMHB)

    The ora_gcw process is an infrastructure helper process for LMHB. It monitors the LMHB process for stalls and restarts it, if necessary.

  • GTX0-j: Global Transaction Process

    The GTX0-j process provides transparent support for XA global transactions in Oracle RAC environments. The database autotunes the number of these processes based on the workload of XA global transactions.

  • LMON: Global Enqueue Service Monitor

    The LMON process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations.

  • LMD: Global Enqueue Service Daemon

    The LMD process manages incoming remote resource requests within each instance.

  • LMS: Global Cache Service Process

    The LMS process maintains records of the data file statuses and each cached block by recording information in the global resource directory (GRD). The LMS process also controls the flow of messages to remote instances and manages global data block access and transmits block images between the buffer caches of different instances. This processing is part of Cache Fusion.

  • LCK0: Instance Enqueue Process

    The LCK0 process manages non-Cache Fusion resource requests such as library and row cache requests.

  • RMSn: Oracle RAC Management Processes (RMSn)

    The RMSn processes perform manageability tasks for Oracle RAC. Tasks that are accomplished by an RMSn process include the creation of resources that are related to Oracle RAC when new instances are added to the clusters.

  • RSMN: Remote Slave Monitor

    The RSMN process manages background secondary process creation and communication on remote instances. These background secondary processes perform tasks on behalf of a coordinating process running in another instance.

  • SCMN: Statistics Collection and Management

    SCMN is the main thread of a multithreaded process in the threaded Oracle RAC architecture. All other threads of the process are spawned through SCMN, based on requests in the instance. The main thread is otherwise idle: it waits for requests, in particular requests to spawn threads, and handles them along with some periodic maintenance operations.

  • VOSD: Virtual Operating System Daemon

    The VOSD process executes time-bound Oracle Database service actions. VOSD is spawned at instance startup and is responsible for executing system service actions that are critical for the database.

Note:

Many of the Oracle Database components that this section describes are in addition to the components that are described for single-instance Oracle databases in Oracle Database Concepts.

Related Topics

Overview of Automatic Workload Management with Dynamic Database Services

Services represent groups of applications with common attributes, service level thresholds, and priorities.

Application functions can be divided into workloads that are identified by services. For example, Oracle E-Business Suite can define a service for each responsibility, such as general ledger, accounts receivable, order entry, and so on. A service can span one or more Oracle Database instances, or multiple databases in a global cluster. A single instance can support multiple services. The number of instances that are serving a service is transparent to the application. Services provide a single system image to manage competing applications, and to enable each workload to be managed as a unit.

Middle tier applications and clients select a service by specifying the service name as part of the connection in the TNS connect string. For example, data sources for Oracle WebLogic Server are set to route to a service. Using Net Easy*Connection, this connection comprises only the service name and network address, as follows: user_name/password@SCAN/service_name.

Server-side work, such as Oracle Scheduler, Parallel Query, and Oracle GoldenGate queues, set the service name as part of the workload definition. For Oracle Scheduler, jobs are assigned to job classes, and job classes run within services. For Parallel Query and Parallel DML, the query coordinator connects to a service, and the parallel query workers inherit the service for the duration of the parallel processing. For Oracle GoldenGate, streams queues are accessed using services. Work running under a service inherits the thresholds and attributes for the service and is measured as part of the service.

Oracle Database Resource Manager binds services to consumer groups and priorities. Binding services by groups and priorities enables the database to manage the services in the order of their importance. For example, the DBA can define separate services for high priority online users, and lower priority for internal reporting applications. Likewise, the DBA can define Gold, Silver and Bronze services to prioritize the order in which requests are serviced for the same application. When planning the services for a system, the plan should include the priority of each service relative to the other services. In this way, Oracle Database Resource Manager can satisfy the priority-one services first, followed by the priority-two services, and so on.
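As a hedged sketch of binding a service to a consumer group with the DBMS_RESOURCE_MANAGER package (the service name GOLD_SVC and consumer group HIGH_PRIORITY_GROUP are hypothetical names for illustration):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  -- Map sessions that connect through the GOLD_SVC service
  -- to the HIGH_PRIORITY_GROUP consumer group.
  DBMS_RESOURCE_MANAGER.SET_CONSUMER_GROUP_MAPPING(
    attribute      => DBMS_RESOURCE_MANAGER.SERVICE_NAME,
    value          => 'GOLD_SVC',
    consumer_group => 'HIGH_PRIORITY_GROUP');

  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
```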

When users or applications connect to a database, Oracle recommends that you use a service that is specified in the CONNECT_DATA portion of the connect string. Oracle Database automatically creates one database service when the database is created but the behavior of this service is different from that of database services that you subsequently create. To enable more flexibility in the management of a workload that uses the database, Oracle Database enables you to create multiple services and specify on which instances the services start. If you are interested in greater workload management flexibility, then continue reading this chapter to understand the added features that you can use with services.

Note:

The features discussed in this chapter do not work with the following default database services: DB_NAME, DB_UNIQUE_NAME, PDB_NAME, SYS$BACKGROUND, and SYS$USERS. Oracle strongly recommends that you not use these services for applications to connect to the database. You must create cluster managed services to take advantage of these features. You can only manage the services that you create. Any service that a database creates is automatically managed by the database server.
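A cluster managed service of the kind this note recommends might be created and started with SRVCTL along these lines (the database name orcl, service name oltp_svc, and instance names are hypothetical):

```
srvctl add service -db orcl -service oltp_svc -preferred orcl1,orcl2 -available orcl3
srvctl start service -db orcl -service oltp_svc
```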

Dynamic Database Services

Dynamic database services enable you to manage workload distributions to provide optimal performance for users and applications. Dynamic database services offer the following:

  • Services: Oracle Database provides a powerful automatic workload management facility, called services, to enable the enterprise grid vision. Services are entities that you can define in Oracle Real Application Clusters (Oracle RAC) databases that enable you to group database workloads, route work to the optimal instances that are assigned to offer the service, and achieve high availability for planned and unplanned actions.

  • High Availability Framework: An Oracle RAC component that enables Oracle Database to always maintain components in a running state.

  • Fast Application Notification (FAN): Provides information to Oracle RAC applications and clients about cluster state changes and Load Balancing Advisory events, such as UP and DOWN events for instances, services, or nodes. FAN has two methods for publishing events to clients, the Oracle Notification Service daemon, which is used by Java Database Connectivity (JDBC) clients including the Oracle Application Server, and Oracle GoldenGate Advanced Queueing, which is only used by previous releases of Oracle Call Interface (OCI) and Oracle Data Provider for .NET (ODP.NET) clients.

    Note:

    All Oracle Database clients use Oracle Notification Service.

  • Transaction Guard: Provides a protocol and an API for at-most-once running of transactions in case of unplanned outages and duplicate submissions.

  • Application Continuity: Provides a general purpose infrastructure that replays an in-flight request when a recoverable error is received, masking many system, communication, and storage outages, and hardware failures. Unlike existing recovery technologies, this feature attempts to recover the transactional and non-transactional session states beneath the application, so that the outage appears to the application as a delayed processing.

  • Connection Load Balancing: An Oracle Net Services feature that balances incoming connections across all of the instances that provide the requested database service.

  • Load Balancing Advisory: Provides information to applications about the current service levels that the database and its instances provide. The load balancing advisory makes recommendations to applications about where to direct application requests to obtain the best service based on the management policy that you have defined for that service. Load balancing advisory events are published through Oracle Notification Service.

  • Automatic Workload Repository (AWR): Tracks service-level statistics as metrics. You can create server-generated alerts for these metrics when the statistics exceed or fail to meet certain thresholds.

  • Fast Connection Failover (FCF): Enables Oracle Clients to provide rapid failover of connections by subscribing to FAN events.

  • Runtime Connection Load Balancing: Enables Oracle Clients to provide intelligent allocations of connections in the connection pool, based on the current service level provided by the database instances when applications request a connection to complete some work.

  • Single Client Access Name (SCAN): Provides a single name for clients to connect to Oracle RAC that does not change throughout the life of the cluster, even if you add or remove nodes from the cluster. Clients connecting with SCAN can use a straightforward connection string, such as a thin JDBC URL or EZConnect, and still achieve load balancing and client connection failover goals.

You can deploy Oracle RAC and non-cluster Oracle Database environments to use dynamic database service features in many different ways. Depending on the number of nodes and your environment complexity and objectives, your choices for optimal automatic workload management and high-availability configuration depend on several considerations, which are explained in the Automatic Workload Management topics.

Overview of Blocker Resolver

Blocker Resolver is an Oracle Database feature that automatically detects and resolves system delays.

Note:

Starting with Oracle Database Release 23ai, Oracle Quality of Service Management (QoSM, or QoS Management) is desupported.

Blocker Resolver initially identified system delays and then dumped the relevant information about the delay into a trace file. In Oracle Database 12c release 2 (12.2) and later releases, Blocker Resolver can take action on and attempt to resolve the system delay. Blocker Resolver also runs in both single-instance and Oracle RAC database instances.

Blocker Resolver functions, as follows:

  • First detects a system delay and then analyzes the delay and verifies the cause of the delay. It then applies heuristics to decide on a course of action to resolve the delay.

  • Automates the tasks that formerly required a DBA to manually collect trace files and provide them to My Oracle Support to identify the source of the delay, minimizing or eliminating database and application downtime.

  • Periodically scans all processes and analyzes a smaller subset of processes that are holding resources in successive scans. Blocker Resolver ignores processes if there is nothing waiting on the resource.

  • Considers cross-instance delays, which are delays where the holder is a database process waiting on a response from an Oracle ASM instance.

  • Terminates the holder process so that the next process waiting on that resource can proceed, preventing a prolonged delay.

  • Notifies a DBA with an ORA-32701 error message in the alert log.

Overview of Database In-Memory and Oracle RAC

Every Oracle RAC node has its own In-Memory (IM) column store. By default, populated objects are distributed across all IM column stores in the cluster.

Oracle recommends that you size the IM column stores equally on every Oracle RAC node. If an Oracle RAC node does not require an IM column store, then set the INMEMORY_SIZE parameter to 0.

Starting with Oracle Database 19c, Database In-Memory has a new Base Level feature that allows you to use Database In-Memory with up to a 16 GB column store without requiring the Database In-Memory option. In an Oracle RAC database, the INMEMORY_SIZE setting in each database instance must not exceed 16 GB. Set the INMEMORY_FORCE parameter to BASE_LEVEL to enable this feature.
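For example, the Base Level feature described above might be enabled as follows (the values are illustrative; both parameters take effect only after an instance restart when set in the SPFILE):

```sql
ALTER SYSTEM SET INMEMORY_FORCE = BASE_LEVEL SCOPE = SPFILE;
ALTER SYSTEM SET INMEMORY_SIZE = 16G SCOPE = SPFILE;
-- Restart each instance for the new settings to take effect.
```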

It is possible to have completely different objects populated on every node, or to have larger objects distributed across all of the IM column stores in the cluster. On Oracle Engineered Systems, it is also possible for the same objects to appear in the IM column store on every node. The distribution of objects across the IM column stores in a cluster is controlled by two subclauses to the INMEMORY attribute: DISTRIBUTE and DUPLICATE.

In an Oracle RAC environment, an object that only has the INMEMORY attribute specified is automatically distributed across the IM column stores in the cluster. You can use the DISTRIBUTE clause to specify how an object is distributed across the cluster. By default, the type of partitioning used (if any) determines how the object is distributed. If the object is not partitioned, then it is distributed by rowid range. Alternatively, you can specify the DISTRIBUTE clause to override the default behavior.

On an Oracle Engineered System, you can duplicate or mirror populated objects across the IM column stores in the cluster. This technique provides the highest level of redundancy. The DUPLICATE clause controls how an object is duplicated. If you specify only DUPLICATE, then one mirrored copy of the data is distributed across the IM column stores in the cluster. To duplicate the entire object in each IM column store, specify DUPLICATE ALL.
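The DISTRIBUTE and DUPLICATE subclauses take this general form (the table name sales is hypothetical; DUPLICATE is honored only on Oracle Engineered Systems):

```sql
-- Distribute the object across all IM column stores by rowid range.
ALTER TABLE sales INMEMORY DISTRIBUTE BY ROWID RANGE;

-- On an Engineered System, keep a full copy in every IM column store.
ALTER TABLE sales INMEMORY DUPLICATE ALL;
```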

Note:

When you deploy Oracle RAC on a non-Engineered System, the DUPLICATE clause is treated as NO DUPLICATE.

Overview of Managing Oracle RAC Environments

When managing Oracle RAC, there are many considerations, such as the deployment type, the tools to use, how to monitor the system, and how to evaluate performance.

About Designing and Deploying Oracle RAC Environments

Any enterprise that is designing and implementing a high availability strategy with Oracle RAC must begin by performing a thorough analysis of the business drivers that require high availability.

An analysis of business requirements for high availability combined with an understanding of the level of investment required to implement different high availability solutions enables the development of a high availability architecture that achieves both business and technical objectives.

See Also:

For help choosing and implementing the architecture that best fits your availability requirements:

  • "Design and Deployment Techniques" provides a high-level overview you can use to evaluate the high availability requirements of your business.

  • Oracle Database High Availability Overview and Best Practices describes how to select the most suitable architecture for your organization, describes several high availability architectures, provides guidelines for choosing the one that best meets your requirements, and provides information about the Oracle Maximum Availability Architecture.

About Administrative Tools for Oracle RAC Environments

You administer a cluster database as a single-system image using the Server Control Utility (SRVCTL), Oracle Enterprise Manager, SQL*Plus, and other utilities.

  • Server Control Utility (SRVCTL): SRVCTL is a command-line interface that you can use to manage an Oracle RAC database from a single point. You can use SRVCTL to start and stop the database and instances and to delete or move instances and services. You can also use SRVCTL to manage configuration information, Oracle Real Application Clusters One Node (Oracle RAC One Node), Oracle Clusterware, and Oracle ASM.

  • Oracle Fleet Patching and Provisioning (Oracle FPP): Use Oracle Fleet Patching and Provisioning to patch, upgrade, and provision Oracle RAC databases.

  • Oracle Enterprise Manager: Oracle Enterprise Manager Cloud Control provides a GUI for managing both noncluster database and Oracle RAC database environments. Oracle recommends that you use Oracle Enterprise Manager to perform administrative tasks whenever feasible.

    You can use Oracle Enterprise Manager Cloud Control to also manage Oracle RAC One Node databases.

  • SQL*Plus: SQL*Plus commands operate on the current instance. The current instance can be either the local default instance on which you initiated your SQL*Plus session, or it can be a remote instance to which you connect with Oracle Net Services.

  • Configuration Verification Utility (CVU): CVU is a command-line tool that you can use to verify a range of cluster and Oracle RAC components, such as shared storage devices, networking configurations, system requirements, and Oracle Clusterware, in addition to operating system groups and users. You can use CVU for preinstallation checks and for postinstallation checks of your cluster environment. CVU is especially useful during preinstallation and during installation of Oracle Grid Infrastructure and Oracle RAC components. Oracle Universal Installer runs CVU after installing Oracle Clusterware and Oracle Database to verify your environment.

    Install and use CVU before you install Oracle RAC to ensure that your configuration meets the minimum Oracle RAC installation requirements. Also, use CVU for verifying the completion of ongoing administrative tasks, such as node addition and node deletion.

    Note:

    Starting with Oracle Grid Infrastructure 23ai, Cluster Verification Utility (CVU) is renamed to Configuration Verification Utility (CVU).
  • Oracle DBCA: Oracle Database Configuration Assistant (Oracle DBCA) is the recommended utility for creating and initially configuring Oracle RAC, Oracle RAC One Node, and Oracle noncluster databases.

  • NETCA: Oracle Net Configuration Assistant (NETCA) configures the network for your Oracle RAC environment.
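As a sketch of how some of these tools are typically invoked from the command line, the following commands show SRVCTL, SQL*Plus, and CVU in use. The database name orcl, instance name orcl1, net service name orcl1_svc, and node names node1 and node2 are hypothetical placeholders; substitute the names configured in your environment.

```shell
# Check the status of all instances of a cluster database named "orcl",
# then stop and restart a single instance with SRVCTL.
srvctl status database -db orcl
srvctl stop instance -db orcl -instance orcl1
srvctl start instance -db orcl -instance orcl1

# Connect SQL*Plus to a specific remote instance through an Oracle Net
# service name ("orcl1_svc" is a placeholder for a configured service).
sqlplus sys@orcl1_svc as sysdba

# Run CVU preinstallation checks for Oracle Grid Infrastructure on the
# nodes that will form the cluster.
cluvfy stage -pre crsinst -n node1,node2
```

These commands act on a live cluster, so they are meant as a reference for the command shapes rather than a script to run verbatim.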

See Also:

  • "Administering Database Instances and Cluster Databases" for an introduction to Oracle RAC administration using SRVCTL, Oracle Enterprise Manager, and SQL*Plus

  • "Monitoring Oracle RAC and Oracle Clusterware"

  • "Server Control Utility Reference" for SRVCTL reference information

  • Oracle Clusterware Administration and Deployment Guide for information about the Configuration Verification Utility (CVU), in addition to other Oracle Clusterware tools, such as the OIFCFG tool for allocating and deallocating network interfaces and the OCRCONFIG command-line tool for managing OCR

  • Oracle Database Net Services Administrator's Guide for more information about NETCA

About Monitoring Oracle RAC Environments

Web-based Oracle Enterprise Manager Cloud Control enables you to monitor an Oracle RAC database.

Oracle Enterprise Manager Cloud Control is a central point of control for the Oracle environment that you access by way of a graphical user interface (GUI). See "Monitoring Oracle RAC and Oracle Clusterware" for more information about using Oracle Enterprise Manager to monitor Oracle RAC environments.

Also, note the following recommendations about monitoring Oracle RAC environments:

  • Use Oracle Enterprise Manager Cloud Control to initiate cluster database management tasks.

  • Use Oracle Enterprise Manager Cloud Control to administer multiple or individual Oracle RAC databases.

  • Use the global views (GV$ views), which are based on V$ views. The catclustdb.sql script creates the GV$ views. Run this script if you do not create your database with Oracle DBCA. Otherwise, Oracle DBCA runs this script for you.

    For almost every V$ view, there is a corresponding global GV$ view. In addition to the V$ information, each GV$ view contains an extra column named INST_ID, which displays the instance number from which the associated V$ view information was obtained.

  • Use the sophisticated management and monitoring features of the Oracle Database Diagnostic and Tuning packs within Oracle Enterprise Manager that include the Automatic Database Diagnostic Monitor (ADDM) and Automatic Workload Repository (AWR).

    Note:

    Although Statspack is available for backward compatibility, Statspack provides reporting only. You must run Statspack at level 7 to collect statistics related to block contention and segment block waits.
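As an illustration of the GV$ views described above, the following query reports the status of every instance in the cluster, using the extra INST_ID column to identify which instance contributed each row. The connect string is a placeholder for your environment's credentials and net service name.

```shell
# Query a global (GV$) view from SQL*Plus; INST_ID identifies the
# instance from which each row was obtained.
sqlplus -s system@orcl <<'EOF'
SELECT inst_id, instance_name, status
FROM   gv$instance
ORDER  BY inst_id;
EOF
```

The same pattern applies to almost any V$ view: replace the V$ name with its GV$ counterpart and, if needed, filter on INST_ID to examine a single instance.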

About Evaluating Performance in Oracle RAC Environments

You do not need to perform special tuning for Oracle RAC; Oracle RAC scales without special configuration changes.

If your application performs well on a noncluster Oracle database, then it will perform well in an Oracle RAC environment. Many of the tuning tasks that you would perform on a noncluster Oracle database can also improve Oracle RAC database performance. This is especially true if your environment requires scalability across a greater number of CPUs.

Some of the performance features specific to Oracle RAC include:

  • Dynamic resource allocation

    • Oracle Database dynamically allocates Cache Fusion resources as needed

    • The dynamic acquiring of resources improves performance by keeping resources local to data blocks

  • Cache Fusion enables a simplified tuning methodology

    • You do not have to tune any parameters for Cache Fusion

    • No application-level tuning is necessary

    • You can use a bottom-up tuning approach with virtually no effect on your existing applications

  • More detailed performance statistics

    • More views for Oracle RAC performance monitoring

    • Oracle RAC-specific performance views in Oracle Enterprise Manager