1 Introduction to Oracle RAC

Provides an overview of Oracle Real Application Clusters (Oracle RAC) installation and administration, and various components and functions.

Note:

A multitenant container database is the only supported architecture in Oracle Database 21c. While the documentation is being revised, legacy terminology may persist. In most cases, "database" and "non-CDB" refer to a CDB or PDB, depending on context. In some contexts, such as upgrades, "non-CDB" refers to a non-CDB from a previous release.

Overview of Oracle RAC

Learn about Oracle Real Application Clusters (Oracle RAC), and the differences in functionality between Oracle RAC and a single-instance Oracle Database.

Non-cluster Oracle Database instances have a one-to-one relationship between Oracle Database and the instance. Oracle RAC environments, however, have a one-to-many relationship between the database and instances. An Oracle RAC database can have several instances, all of which access one Oracle Database. All database instances must use the same interconnect, which can also be used by Oracle Clusterware.
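The one-to-many relationship is easy to observe from SQL. For example, the following query, run from any instance (the GV$ views are standard in Oracle RAC), returns one row for each running instance of the database:

SELECT inst_id, instance_name, host_name, status
  FROM gv$instance;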

Oracle RAC databases differ architecturally from a non-cluster Oracle Database, in that each Oracle RAC database instance also has:

  • At least one additional thread of redo for each instance
  • An instance-specific undo tablespace

The combined processing power of the multiple servers can provide greater throughput and Oracle RAC scalability than is available from a single server.

A cluster comprises multiple interconnected computers or servers that appear as if they are one server to end users and applications. The Oracle RAC option with Oracle Database enables you to cluster Oracle Database instances. Oracle RAC uses Oracle Clusterware for the infrastructure to bind multiple servers so they operate as a single system.

Oracle Clusterware is a portable cluster management solution that is integrated with Oracle Database. Oracle Clusterware is a required component for using Oracle RAC, and it provides the infrastructure necessary to run Oracle RAC. Oracle Clusterware also manages resources, such as Virtual Internet Protocol (VIP) addresses, databases, listeners, and services. In addition, Oracle Clusterware enables both non-cluster Oracle databases and Oracle RAC databases to use the Oracle high-availability infrastructure. Oracle Clusterware, together with Oracle Automatic Storage Management (Oracle ASM), comprises the Oracle Grid Infrastructure and enables you to create a clustered pool of storage to be used by any combination of non-cluster and Oracle RAC databases.
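For example, you can list the resources that Oracle Clusterware currently manages, such as VIP addresses, listeners, databases, and services, with the following command from the Oracle Grid Infrastructure home (the output depends on your configuration):

$ crsctl stat res -t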

Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates. Starting with Oracle Database 19c, the integration of vendor clusterware with Oracle Clusterware is deprecated, and is not supported in Oracle Database 21c.

The following figure shows how Oracle RAC is the Oracle Database option that provides a single system image for multiple servers to access one Oracle database. In Oracle RAC, each Oracle instance must run on a separate server.

Figure 1-1 Oracle Database with Oracle RAC Architecture

Description of Figure 1-1 follows
Description of "Figure 1-1 Oracle Database with Oracle RAC Architecture"

Traditionally, an Oracle RAC environment is located in one data center. However, you can configure Oracle RAC on an Oracle Extended Cluster, an architecture that provides extremely fast recovery from a site failure and allows all nodes, at all sites, to actively process transactions as part of a single database cluster. In an extended cluster, the nodes are typically geographically dispersed, such as between two fire cells, between two rooms or buildings, or between two different data centers or cities. For availability reasons, the data must be located at both sites, which requires the implementation of disk mirroring technology for storage.

If you choose to implement this architecture, then you must assess whether this architecture is a good solution for your business, especially considering distance, latency, and the degree of protection it provides. Oracle RAC on extended clusters provides higher availability than is possible with local Oracle RAC configurations. However, an extended cluster may not fulfill all of the disaster-recovery requirements of your organization. A feasible separation provides great protection for some disasters (for example, local power outage or server room flooding), but it cannot provide protection against all types of outages. For comprehensive protection against disasters—including protection against corruptions and regional disasters—Oracle recommends the use of Oracle Data Guard with Oracle RAC, as described in the Oracle Data Guard Concepts and Administration, and on the Maximum Availability Architecture (MAA) Web site.

Oracle RAC is a unique technology that provides high availability and scalability for all application types. The Oracle RAC infrastructure is also a key component for implementing the Oracle enterprise grid computing architecture. Having multiple instances access a single database prevents the server from being a single point of failure. Oracle RAC enables you to combine smaller commodity servers into a cluster to create scalable environments that support mission critical business applications. Applications that you deploy on Oracle RAC databases can operate without code changes.

Oracle RAC allows multiple instances running on different nodes to access the same database. Oracle RAC Cache Fusion automatically ensures that changes from multiple sessions running on different instances are coordinated. In Oracle Database 21c, the background processes that perform Oracle RAC Cache Fusion functionality have been enhanced to handle fatal errors. Depending on the cause, Oracle RAC can retry the operation or correct the error to prevent instance failure. This reduces the occurrence of instance failures and helps to limit their impact on workloads.

Overview of Oracle Multitenant with Oracle RAC

Oracle Multitenant is an Oracle Database option, introduced in Oracle Database 12c, that simplifies consolidation, provisioning, upgrades, and more.

It is based on an architecture that allows a multitenant container database (CDB) to hold several pluggable databases (PDBs). You can adopt an existing database as a PDB without having to change the application tier. In this architecture, Oracle RAC provides the local high availability that is required when consolidating various business-critical applications on one system.

When using PDBs with Oracle RAC, the multitenant CDB is based on Oracle RAC. You can make each PDB available on either every instance of the Oracle RAC CDB or a subset of instances. In either case, access to and management of the PDBs are regulated using dynamic database services, which applications also use to connect to the respective PDB, just as they would connect to a single-instance Oracle database using Oracle Net Services.
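As a sketch, assuming a CDB named mycdb with instances mycdb1 and mycdb2 and a PDB named salespdb (all names here are illustrative), you could create and start a dynamic database service for the PDB using SRVCTL:

srvctl add service -db mycdb -service sales_svc -pdb salespdb -preferred mycdb1,mycdb2
srvctl start service -db mycdb -service sales_svc

Clients then connect to the PDB through the sales_svc service rather than to a specific instance.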

You can isolate PDBs to prevent certain operations from being performed on or within a particular PDB that may interfere with other PDBs sharing the same Oracle RAC database or database instance. PDB isolation allows for a higher degree of consolidation using Oracle Multitenant.

If you create an Oracle RAC database as a CDB and plug one or more PDBs into the CDB, then, by default, a PDB is not started automatically on any instance of the Oracle RAC CDB. With the first dynamic database service assigned to the PDB (other than the default database service which has the same name as the database name), the PDB is made available on those instances on which the service runs.

Whether or not a PDB is available on more than one instance of an Oracle RAC CDB, the PDB is typically managed by the services running on it. You can manually enable PDB access on each instance of an Oracle RAC CDB by starting the PDB manually on that instance.

Overview of Installing Oracle RAC

Install Oracle Grid Infrastructure and Oracle Database software using Oracle Universal Installer, and create your database with Database Configuration Assistant (DBCA).

This ensures that your Oracle RAC environment has the optimal network configuration, database structure, and parameter settings for the environment that you selected.

Alternatively, you can install Oracle RAC using Fleet Patching and Provisioning, which offers all of the advantages of Oracle Universal Installer and DBCA previously specified. In addition, Fleet Patching and Provisioning allows for standardization and automation.

Understanding Compatibility in Oracle RAC Environments

As part of your deployment plan, review the release compatibility restrictions and guidelines for using different Oracle Database releases on Oracle Grid Infrastructure.

To run Oracle RAC in configurations with different releases of Oracle Database in the same cluster, you must first install Oracle Grid Infrastructure, which must be the same version, or higher, as the highest version of Oracle Database that you want to deploy in this cluster. For example, to run an Oracle RAC 18c database and an Oracle RAC 21c database in the same cluster, you must install Oracle Grid Infrastructure 21c. Contact My Oracle Support for more information about version compatibility in Oracle RAC environments.

Oracle RAC Database Installation

Learn how to install Oracle Real Application Clusters (Oracle RAC) and review the restrictions.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

Before you install Oracle RAC, first install Oracle Grid Infrastructure. The release of Oracle Grid Infrastructure must be the same as or newer than the Oracle RAC release that you are installing. Oracle Universal Installer only enables you to deploy an Oracle Database home across the nodes in a cluster if you previously installed and configured Oracle Grid Infrastructure for the cluster. If Oracle Universal Installer does not give you an option to deploy the database home across all of the nodes in the cluster, then the server on which you are attempting to install Oracle RAC is not a cluster member node.

After installation, you must manually run DBCA to create an Oracle RAC or an Oracle RAC One Node database. Database creation is therefore a two-step process. First, install the Oracle RAC software by running the Oracle Database installer. Then create and configure your Oracle RAC or Oracle RAC One Node database using DBCA.

Note:

Before you create a database, a default listener must be running in the Oracle Grid Infrastructure home. If there is no default listener in the Oracle Grid Infrastructure home, then DBCA returns an error instructing you to run NETCA from the Oracle Grid Infrastructure home to create a default listener.

The Oracle RAC software is distributed as part of the Oracle Database installation media. By default, the Oracle Database software installation process installs the Oracle RAC option when the installation process recognizes that you are performing the installation on a cluster. Oracle Universal Installer installs Oracle RAC into a directory structure referred to as the Oracle home, which is separate from the Oracle home directory for other Oracle software running on the system. Because Oracle Universal Installer is cluster aware, it installs the Oracle RAC software on all of the nodes that you defined to be part of the cluster.

Starting with Oracle Database 21c, the installation process creates a read-only Oracle home directory by default. You can use the read-only Oracle home as a software image to be shared across multiple database servers. This simplifies patching and mass rollout because only one Oracle home image needs to be updated to distribute a patch to multiple database servers. Read-only Oracle Database homes are more secure than traditional Oracle Database homes because there is a clear separation of configuration information from the actual software. Thus, there is no risk of new files being created inside the Oracle Database home from active processes.

Oracle RAC Database Creation

Part of Oracle Database deployment is the creation of the database.

You can create a database as part of the database deployment, or you can deploy only the database software first and then use DBCA to create any database that is meant to run out of the newly created Oracle home.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

In Oracle RAC environments, the value of DB_UNIQUE_NAME.DB_DOMAIN, taken as a whole, must be unique for each database within your enterprise. The name of each pluggable database (PDB) must also be unique within the cluster.
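For example, you can verify these values from SQL*Plus (the second command is a SQL*Plus convenience for displaying initialization parameters):

SELECT db_unique_name FROM v$database;
SHOW PARAMETER db_domain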

By default, DBCA creates one service for your Oracle RAC installation. This is the default database service and you should not use this service for user connectivity. The default database service is typically identified using the combination of the DB_NAME and DB_DOMAIN initialization parameters: db_name.db_domain. The default service is available on all instances in Oracle RAC environments, unless the database is in restricted mode.

Note:

Oracle recommends that you reserve the default database service for maintenance operations and create dynamic database services for user or application connectivity as a post-database creation step, using either SRVCTL or Oracle Enterprise Manager. DBCA no longer offers a dynamic database service creation option for Oracle RAC databases. For Oracle RAC One Node databases, you must create at least one dynamic database service.
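As a sketch, a dynamic database service for application connectivity could be created and started with SRVCTL as follows (the database, service, and instance names are illustrative; an -available instance is used only if a preferred instance fails):

srvctl add service -db orcl -service oltp_svc -preferred orcl1,orcl2 -available orcl3
srvctl start service -db orcl -service oltp_svc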

Overview of Extending Oracle RAC Clusters

To extend an Oracle RAC cluster (a process also known as cloning) and add nodes to your environment after your initial deployment, you must do this on multiple layers, considering the management style that you currently use in the cluster.

Oracle provides various means of extending Oracle RAC clusters. Choose from the following approaches to extend the current environment:

  • Fleet Patching and Provisioning to provision new Oracle RAC databases and other software
  • Cloning using cloning scripts
  • Adding nodes using the addnode.sh (addnode.bat on Windows) script
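As a sketch, after the prerequisites are met on the new server, the addnode.sh script can be run from an existing node's home to extend that home to the new server (the node and VIP names are illustrative; the CLUSTER_NEW_VIRTUAL_HOSTNAMES variable applies when extending the Oracle Grid Infrastructure home):

$ORACLE_HOME/addnode/addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"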

All of these approaches apply, regardless of how you initially deployed your environment. Each approach copies the Oracle software onto the node that you plan to add to the cluster. Software that is copied to the node includes the Oracle Grid Infrastructure software and the Oracle Database homes.

For Oracle Database homes, ensure that the database software is deployed on all of the nodes on which database instances can potentially run. In either case, first deploy Oracle Grid Infrastructure on all of the nodes that are to be part of the cluster.

Note:

Oracle cloning is not a replacement for cloning using Oracle Enterprise Manager as part of the Provisioning Pack. When you clone Oracle RAC using Oracle Enterprise Manager, the provisioning process includes a series of steps where details about the home that you want to capture, the location to which you want to deploy, and various other parameters are collected.

For new installations, or if you install only one Oracle RAC database, use the traditional automated and interactive installation methods, such as Oracle Universal Installer, Fleet Patching and Provisioning, or the Provisioning Pack feature of Oracle Enterprise Manager. To add or delete Oracle RAC from nodes in a cluster, use the procedures detailed in the Adding and Deleting Oracle RAC from Nodes... topics listed at the end of this topic.

The cloning process assumes that you successfully installed an Oracle Clusterware home and an Oracle home with Oracle RAC on at least one node. In addition, all root scripts must have run successfully on the node from which you are extending your cluster database.

See Also:

Oracle Enterprise Manager online help system for more information about the Provisioning Pack

Overview of Oracle Real Application Clusters One Node

Oracle Real Application Clusters One Node (Oracle RAC One Node) is an option for Oracle Database Enterprise Edition that has been available since Oracle Database 11g release 2 (11.2).

Oracle RAC One Node is a single instance of an Oracle RAC-enabled database that, under normal operations, runs on only one node in the cluster. This option adds to the flexibility that Oracle offers for database consolidation while reducing management overhead by providing a standard deployment for Oracle databases in the enterprise. An Oracle RAC One Node database requires Oracle Grid Infrastructure and, therefore, requires the same hardware setup as an Oracle RAC database.

Oracle supports Oracle RAC One Node on all platforms on which Oracle RAC is certified. Similar to Oracle RAC, Oracle RAC One Node is certified on Oracle Virtual Machine (Oracle VM). Using Oracle RAC or Oracle RAC One Node with Oracle VM increases the benefits of Oracle VM with the high availability and scalability of Oracle RAC.

With Oracle RAC One Node, there is no limit to server scalability and, if applications grow to require more resources than a single node can supply, then you can upgrade your applications online to Oracle RAC. If the node that is running Oracle RAC One Node becomes overloaded, then you can relocate the instance to another node in the cluster. With Oracle RAC One Node you can use the Online Database Relocation feature to relocate the database instance with no downtime for application users. Alternatively, you can limit the CPU consumption of individual database instances per server within the cluster using Resource Manager Instance Caging and dynamically change this limit, if necessary, depending on the demand scenario.
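As a sketch, an online relocation and Resource Manager Instance Caging might look as follows (the database name, target node, and CPU limit are illustrative; the relocation timeout is in minutes):

srvctl relocate database -db racone -node node2 -timeout 30

ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DEFAULT_PLAN';
ALTER SYSTEM SET CPU_COUNT = 4;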

Using the Single Client Access Name (SCAN) to connect to the database, clients can locate the service independently of the node on which it is running. Relocating an Oracle RAC One Node instance is therefore mostly transparent to the client, depending on the client connection. Oracle recommends using either Application Continuity and Oracle Fast Application Notification or Transparent Application Failover to minimize the impact of a relocation on the client.
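For example, a client can use an Easy Connect string that references the SCAN instead of any individual node, so the same connect string remains valid after the instance relocates (the host, port, and service names are illustrative):

sqlplus hr@//sales-scan.example.com:1521/oltp_svc.example.com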

Oracle RAC One Node databases are administered slightly differently from Oracle RAC or non-cluster databases. For Oracle RAC One Node databases, you must monitor the candidate node list and make sure a server is always available for failover, if possible.

Note:

  • Oracle RAC One Node supports Transaction Guard and Application Continuity for failing clients over.
  • To prepare for all failure possibilities, you must add at least one Dynamic Database Service (Oracle Clusterware-managed database service) to an Oracle RAC One Node database.

Overview of Oracle Clusterware for Oracle RAC

Oracle Clusterware provides a complete, integrated clusterware management solution on all Oracle Database platforms.

Guidelines for Using Oracle Clusterware

Oracle Clusterware provides all of the features required to manage cluster databases, including node membership, group services, global resource management, and high availability functions.

You can install Oracle Clusterware independently or as a prerequisite to installing Oracle RAC. Oracle Database features, such as services, use the underlying Oracle Clusterware mechanisms to provide advanced capabilities. Starting with Oracle Database 21c, third-party clusterware products are no longer supported with Oracle RAC.

Oracle Clusterware is designed for, and tightly integrated with, Oracle RAC. You can use Oracle Clusterware to manage high-availability operations in a cluster. When you create an Oracle RAC database using any of the management tools, the database is registered with and managed by Oracle Clusterware, along with the other required components such as the VIP address, the Single Client Access Name (SCAN) (which includes the SCAN VIPs and the SCAN listener), Oracle Notification Service, and the Oracle Net listeners. These resources automatically start when the node starts and automatically restart if they fail. The Oracle Clusterware daemons run on each node.

Anything that Oracle Clusterware manages is known as a CRS resource. A CRS resource can be a database, an instance, a pluggable database (PDB), a service, a listener, a VIP address, or an application process. Oracle Clusterware manages CRS resources based on the resource's configuration information, which is stored in the Oracle Cluster Registry (OCR). You can use SRVCTL commands to administer any Oracle-defined CRS resources. Oracle Clusterware also provides the framework that enables you to create CRS resources to manage any process running on servers in the cluster that is not predefined by Oracle. Oracle Clusterware stores the information that describes the configuration of these components in the OCR, which you can administer.
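For example, you can check an Oracle-defined CRS resource with SRVCTL, and register a user-defined application process as a CRS resource with CRSCTL (the resource name and action script path below are illustrative):

srvctl status database -db orcl

crsctl add resource myapp -type cluster_resource -attr "ACTION_SCRIPT=/opt/myapp/action.scr,CHECK_INTERVAL=30"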

Overview of Reader Nodes

Reader nodes are instances of Oracle RAC databases that provide read-only access, primarily for reporting and analytical purposes.

The advantage of read-only instances is that they do not suffer performance impacts like normal (read/write) database instances do during cluster reconfigurations, for example, when a node is undergoing maintenance or suffers a failure.

You can create services to direct queries to read-only instances running on reader nodes. These services can use parallel query to further speed up performance. Oracle recommends that you size the memory in these reader nodes as high as possible so that parallel queries can use the memory for best performance.
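As a sketch, assuming read-only instances mycdb3 and mycdb4 run on the reader nodes (all names here are illustrative), a reporting service can be placed only on those instances so that read-only queries are directed to them:

srvctl add service -db mycdb -service rpt_svc -preferred mycdb3,mycdb4
srvctl start service -db mycdb -service rpt_svc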

While it is possible for a reader node to host a writable database instance, Oracle recommends that reader nodes be dedicated to hosting read-only instances to achieve the best performance.

Overview of Local Temporary Tablespaces

Oracle uses local temporary tablespaces to write spill-overs to local, non-shared temporary tablespaces that are created on local disks on the reader nodes.

It is still possible for SQL operations, such as hash aggregations, sorts, hash joins, creations of cursor-duration temporary tables for the WITH clause, and star transformations to spill over to disk. The spill-overs go to the global temporary tablespaces on shared disks. Management of local temporary tablespaces is similar to that of existing temporary tablespaces.

Local temporary tablespaces improve temporary tablespace management in read-only instances by:

  • Storing temp files in reader node private storage to take advantage of local storage I/O benefits.
  • Avoiding expensive cross-instance temporary tablespace management.
  • Increasing the addressability of temporary tablespaces.
  • Improving instance warm-up performance by eliminating on-disk space metadata management.

Note:

You cannot use local temporary tablespaces to store database objects, such as tables or indexes. This same restriction is also true for space for Oracle global temporary tables.

This section includes the following topics:

Parallel Execution Support for Cursor-Duration Temporary Tablespaces

The temporary tablespaces that are created for the WITH clause and star transformations exist in the temporary tablespace on shared disk. A set of parallel query child processes loads the intermediate query results into these temporary tablespaces, which are then read by different child processes. There is no restriction on how the child processes that read these results are allocated, because any parallel query child process on any instance can read the temporary tablespaces residing on the shared disk.

For the read-write and read-only instance architecture, when parallel query child processes load intermediate results into the local temporary tablespaces of an instance, only the parallel query child processes of the instance where the intermediate results are stored share an affinity with the reads for those results and can thus read them.

Local Temporary Tablespace Organization

Review the following information when organizing the local temporary tablespaces.

For example:

CREATE LOCAL TEMPORARY TABLESPACE local_temp_ts TEMPFILE
   '/u01/app/oracle/database/21.0.0/dbs/temp_file'
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;

  • Creation of a local temporary tablespace results in the creation of local temporary files on every instance and not a single file, as is currently true for shared global temporary tablespaces.

  • You can create local temporary tablespaces for both read-only and read-write instances.

    CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_ts_for_all TEMPFILE
      '/u01/app/oracle/database/21.0.0/dbs/temp_file_all'
      EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;

Temporary Tablespace Hierarchy

When you define local temporary tablespaces and shared (existing) temporary tablespaces, there is a hierarchy that determines how the tablespaces are used.

There can be multiple shared temporary tablespaces in a database, such as the default shared temporary tablespace for the database and multiple temporary tablespaces that are assigned to individual users. If a user has a shared temporary tablespace assigned, then that tablespace is used first; otherwise, the database default temporary tablespace is used.

Once a tablespace has been selected for spilling during query processing, there is no switching to another tablespace. For example, if a user has a shared temporary tablespace assigned and during spilling it runs out of space, then there is no switching to an alternative tablespace. The spilling, in that case, results in an error. Additionally, remember that shared temporary tablespaces are shared among instances.

The allocation of temporary space for spilling to a local temporary tablespace differs between read-only and read-write instances. For read-only instances, the following is the priority of selecting which temporary location to use for spills:

  1. Allocate from a user's local temporary tablespace.
  2. Allocate from the database default local temporary tablespace.
  3. Allocate from a user's temporary tablespace.
  4. Allocate from the database default temporary tablespace.

Note:

If there is no local temporary tablespace in the database, then read-only instances spill to the shared temporary tablespace.

For read-write instances, the priority of allocation differs from the preceding order because shared temporary tablespaces are given priority. Allocation proceeds as follows:

  1. A user’s shared temporary tablespace.
  2. A user’s local temporary tablespace.
  3. The database default shared temporary tablespace.
  4. The database default local temporary tablespace.

Note:

To make read-write instances use a local temporary tablespace, create that temporary tablespace with the LOCAL...FOR ALL option.

Local Temporary Tablespace Features

Review the following information when using local temporary tablespaces.

Instances cannot share local temporary tablespaces; one instance cannot take local temporary space from another. If an instance runs out of temporary tablespace during spilling, then the statement results in an error.

  • Local temporary tablespaces support only one BIGFILE per tablespace.

  • To address contention issues arising from having only one BIGFILE-based local temporary tablespace, you can assign multiple local temporary tablespaces to different users as the default.

  • A database administrator can specify the default temporary tablespace for a user using ALTER USER syntax. For example:

    ALTER USER MAYNARD LOCAL TEMPORARY TABLESPACE temp_ts;
  • You can configure a user with two default temporary tablespaces:

    • One local temporary tablespace to be used when the user is connected to a read-only instance running on the reader nodes.

    • One shared temporary tablespace to be used when the same user is connected on the read-write instances that are running on a Hub Node.

Metadata Management of Local Temporary Files

Instance-specific information, such as bitmap for allocation, current size for a temporary file, and the file status, is stored in the SGA and not in control files because such information can vary across instances.

Currently, temporary file information (such as file name, creation size, creation SCN, temporary block size, and file status) is stored in the control file along with the initial and max files, as well as auto extent attributes. However, information about local temporary files in the control file is common to all applicable instances.

When an instance starts, it reads the control file information and creates the temporary files that constitute the local temporary tablespace for that instance. If there are two or more instances running on a node, then each instance has its own local temporary files.

For local temporary tablespaces, there is a separate file for each involved instance. The local temporary file names follow a naming convention such that the instance numbers are appended to the temporary file names that are specified while creating the local temporary tablespace.

For example, assume that a read-only node, N1, runs two Oracle read-only database instances with numbers 3 and 4. The following DDL command creates two files on node N1—/temp/temp_file_3 and /temp/temp_file_4, for instance 3 and 4 respectively:

CREATE LOCAL TEMPORARY TABLESPACE temp_ts TEMPFILE '/temp/temp_file'
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;

Assuming that there are two read-write instances, instances 1 and 2, and two read-only instances, instances 3 and 4, the following DDL command creates four files—/temp/temp_file_all_1 and /temp/temp_file_all_2 for instances 1 and 2, respectively, and /temp/temp_file_all_3 and /temp/temp_file_all_4 for instances 3 and 4, respectively:

CREATE LOCAL TEMPORARY TABLESPACE FOR ALL temp_ts_for_all TEMPFILE '/temp/temp_file_all'
   EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;

DDL Support for Local Temporary Tablespaces

You can manage local temporary tablespaces and temporary files with either the DDL command ALTER TABLESPACE, or ALTER DATABASE.

Run all local temporary tablespace management and creation DDL commands from a read-write instance. All other DDL commands affect all instances in a homogeneous manner.

For example, the following command resizes the temporary files on all read-only instances:

ALTER TABLESPACE temp_ts RESIZE 1G;

For local temporary tablespaces, Oracle supports the allocation options and their restrictions that are currently active for temporary files.

To run a DDL command on a local temporary tablespace on a read-only instance, there must be at least one read-only instance in the cluster. This restriction is not applicable when creating or altering local temporary tablespaces FOR ALL. You can assign a default local temporary tablespace to the database with the clause DEFAULT LOCAL TEMPORARY TABLESPACE appended to the command ALTER DATABASE.

For example:

ALTER DATABASE DEFAULT LOCAL TEMPORARY TABLESPACE temp_ts;

You can specify default temporary tablespaces when creating the database, as follows:

CREATE DATABASE .. DEFAULT TEMPORARY TABLESPACE temp_ts_for_db TEMPFILE
   '/temp/temp_file_for_db' EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M AUTOEXTEND ON;

You cannot specify default local temporary tablespaces using the command CREATE DATABASE. When you create a database, its default local temporary tablespace points to the default shared temporary tablespace. You must run the command ALTER DATABASE to assign an existing local temporary tablespace as the default for the database.

Local Temporary Tablespace for Users

When you create a user without explicitly specifying shared or local temporary tablespace, the user inherits shared and local temporary tablespace from the corresponding default database tablespaces.

You can specify default local temporary tablespace for a user, as follows:

CREATE USER new_user IDENTIFIED BY new_user LOCAL TEMPORARY TABLESPACE temp_ts_for_all;

You can change the local temporary tablespace for a user using the ALTER USER command, as follows:

ALTER USER maynard LOCAL TEMPORARY TABLESPACE temp_ts;

As previously mentioned, the default user local temporary tablespace can be a shared temporary tablespace. Consider the following items for the ALTER USER...TEMPORARY TABLESPACE command:

  • You can change the user default local temporary tablespace to any existing local temporary tablespace.

  • If you want to set the user default local temporary tablespace to a shared temporary tablespace, T, then T must be the same as the default shared temporary tablespace.

  • If a default user local temporary tablespace points to a shared temporary tablespace, then, when you change the default shared temporary tablespace of the user, you also change the default local temporary tablespace to that tablespace.

The following are some examples of local temporary tablespace management using ALTER commands:

  • To take a local temporary tablespace offline:

    ALTER DATABASE TEMPFILE '/temp/temp_file' OFFLINE;
  • To decrease the size of a local temporary tablespace:

    ALTER TABLESPACE temp_ts SHRINK SPACE KEEP 20M;
  • To change the auto-extension attributes of a local temporary file:

    ALTER TABLESPACE temp_ts AUTOEXTEND ON NEXT 20G;
  • To resize a local temporary file:

    ALTER TABLESPACE temp_ts RESIZE 10G;

    Note:

    When you resize a local temporary file, the resize applies to the individual file on each instance.

Some read-only instances may be down when you run the preceding commands. This does not prevent the commands from succeeding, because when a read-only instance starts up later, it creates the temporary files based on information in the control file. Creation is fast because Oracle Database reformats only the header block of the temporary file, recording information such as the file size. If the temporary files cannot be created, then the read-only instance stays down. Commands submitted from read-write instances are replayed immediately on all open read-only instances.

Atomicity Requirement for Commands

All of the commands that you run from read-write instances are performed in an atomic manner.

This means that the command succeeds only when it succeeds on all live instances.

Local Temporary Tablespace and Dictionary Views

Oracle extended dictionary views can display information about local temporary tablespaces.

Note the following changes to data dictionary views:

  • All of the diagnosability information related to temporary tablespaces and temporary files that is exposed through AWR, SQL monitor, and other utilities is also available for local temporary tablespaces and local temporary files. This information is available through the existing dictionary views for temporary tablespaces and temporary files: DBA_TEMP_FILES and DBA_TEMP_FREE_SPACE.

  • The USER_TABLESPACES and DBA_TABLESPACES dictionary views are extended by a column called SHARED, which indicates whether the tablespace is local or shared.

  • The DBA_TEMP_FILES dictionary view is extended by two columns: SHARED and INST_ID. The SHARED column indicates whether the temporary file is local or shared. The INST_ID column contains the instance number. For shared temporary files, there is a single row per file and INST_ID is null. For local temporary files, there is a row for each file on each instance, with per-instance information such as the size of the file in bytes (BYTES column).

  • The DBA_TEMP_FREE_SPACE dictionary view is extended by two columns: SHARED and INST_ID. The SHARED column indicates whether the temporary file is local or shared. The INST_ID column contains the instance number. For shared temporary files, there is a single row for each file and INST_ID is null. For local temporary files, there is a row for each file on each instance, with per-instance information such as the total free space available (FREE_SPACE column).

  • In the dictionary views, such as DBA_TABLESPACES, Oracle distinguishes the type of the tablespace using the SHARED column with the following values:

    • SHARED: for shared temporary tablespace

    • LOCAL_ON_ALL: for local temporary tablespace on all of the instances

    Note:

    Currently, spills onto temporary tablespace for queries, such as sort and hash join spills, are automatically encrypted. This is also true for spills to local temporary tablespace.
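
For example, the following query (a sketch using the SHARED column described above) distinguishes local from shared temporary tablespaces:

SELECT TABLESPACE_NAME, SHARED FROM DBA_TABLESPACES WHERE CONTENTS = 'TEMPORARY';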

Overview of Oracle RAC Architecture and Processing

Installing Oracle RAC requires software, a network, and a storage configuration.

Oracle RAC requires several components:
  • Oracle Clusterware for concurrent access to the same storage and the same set of data files from all of the nodes in a cluster
  • A communications protocol for enabling interprocess communication (IPC) across all of the nodes in a cluster
  • Multiple database instances that process data as if the data resided on a logically combined, single cache
  • A mechanism for monitoring and communicating the statuses of the nodes in a cluster

Understanding Cluster-Aware Storage Solutions

Learn about the Oracle Real Application Clusters shared everything architecture and what shared everything means for your storage options.

An Oracle RAC database is a shared everything database. All data files, control files, SPFILEs, and redo log files in Oracle RAC environments must reside on cluster-aware shared disks. This enables all of the cluster database instances to access these storage components. Because Oracle RAC databases use a shared everything architecture, Oracle RAC requires cluster-aware storage for all of the database files.

In Oracle RAC, Oracle Database manages the disk access and is certified for use on a variety of storage architectures. It is your choice how to configure your storage, but you must use a supported cluster-aware storage solution. Oracle Database provides the following Oracle RAC storage options:

  • Oracle Automatic Storage Management (Oracle ASM)

    Oracle recommends this solution to manage your storage.

  • A certified cluster file system

    • Oracle recommends Oracle Advanced Cluster File System (Oracle ACFS).

    • A third-party cluster file system on a cluster-aware volume manager that is certified for Oracle RAC. For example:

      • Oracle OCFS2 (Linux only)

      • IBM GPFS (IBM AIX only)

  • Certified network file system (NFS) solution

Oracle RAC and Network Connectivity

All nodes in Oracle RAC environments must connect to at least one Local Area Network (LAN). This network is commonly referred to as the public network, and it enables users and applications to access the database.

In addition to the public network, Oracle RAC requires private network connectivity that is used exclusively for communication among the nodes and database instances that are running on those nodes. This network is commonly referred to as the interconnect.

The interconnect network is a private network that connects all of the servers in a cluster. The interconnect network must use at least one switch and a Gigabit Ethernet adapter.

Note:

  • Oracle supports interfaces with higher bandwidth but does not support using crossover cables with the interconnect.

  • Do not use the interconnect, the private network, for user communication, because Cache Fusion uses the interconnect for interinstance communication.

You can configure Oracle RAC to use either the User Datagram Protocol (UDP) or the Reliable Datagram Sockets (RDS) protocol for inter-instance communication on the interconnect. Oracle Clusterware uses the same interconnect with the UDP protocol, but cannot be configured to use RDS.
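
You can verify which interconnect interfaces and addresses an instance is using by querying the V$CLUSTER_INTERCONNECTS view. For example:

SELECT NAME, IP_ADDRESS, IS_PUBLIC, SOURCE FROM V$CLUSTER_INTERCONNECTS;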

Additional network connectivity is required when using Network Attached Storage (NAS). Network attached storage can be typical NAS devices, such as NFS filers, or it can be storage that is connected using Fibre Channel over IP, for example. This additional network communication channel should be independent of the other communication channels that Oracle RAC uses (the public and private network communication). If the storage network communication must be converged with one of the other network communication channels, then ensure that storage-related communication gets first priority.

Overview of Using Dynamic Database Services to Connect to Oracle Databases

Applications should use the Dynamic Database Services feature to connect to an Oracle database over the public network.

Dynamic Database Services enable you to define rules and characteristics to control how users and applications connect to database instances. These characteristics include a unique name, workload balancing and failover options, and high availability characteristics.

Users can access an Oracle RAC database using a client/server configuration or through one or more middle tiers, with or without connection pooling. By default, a user connection to an Oracle RAC database is established using the TCP/IP protocol but Oracle supports other protocols. Oracle RAC database instances must be accessed through the SCAN for the cluster.

Overview of Virtual IP Addresses

Node VIPs are virtual IP (VIP) addresses that clients use to connect to Oracle RAC databases.

Oracle Clusterware hosts the node VIP addresses on a public network. The process for a typical connection attempt from a database client to an Oracle RAC database instance can be summarized as follows:

  1. The database client connects to SCAN (which includes a SCAN VIP on a public network), providing the SCAN listener with a valid service name.

  2. The SCAN listener then determines which database instance hosts this service and routes the client to the local or node listener on the respective node.

  3. The node listener, listening on a node VIP and a given port, retrieves the connection request and connects the client to an instance on the local node.

If multiple public networks are used on the cluster to support client connectivity through multiple subnets, then the preceding operation is performed within a given subnet.
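
For example, a client can initiate this sequence with an Easy Connect string that names only the SCAN and the service; the host and service names here are illustrative:

sqlplus hr_user@//sales-scan.example.com:1521/oltp_svc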

If a node fails, then the VIP address fails over to another node on which the VIP address can accept TCP connections, but it does not accept connections to the Oracle database. Clients that attempt to connect to a VIP address that does not reside on its home node receive a rapid connection refused error instead of waiting for TCP connect timeout messages. When the network on which the VIP is configured comes back online, Oracle Clusterware fails back the VIP to its home node, where connections are accepted. Generally, VIP addresses fail over when:

  • The node on which a VIP address runs fails
  • All interfaces for the VIP address fail
  • All interfaces for the VIP address are disconnected from the network

Oracle RAC supports multiple public networks to enable access to the cluster through different subnets. Each network resource represents its own subnet and each database service uses a particular network to access the Oracle RAC database. Each network resource is a resource that is managed by Oracle Clusterware, which enables the VIP behavior previously described.

SCAN is a single network name that is defined either in your organization's Domain Name Server (DNS) or in the Grid Naming Service (GNS) that operates in a round robin order among three IP addresses. Oracle recommends that all of the connections to the Oracle RAC database use the SCAN in their client connection string. Incoming connections are load balanced across the active instances providing the requested service through the three SCAN listeners. With SCAN, you do not have to change the client connection even if the configuration of the cluster changes (nodes added or removed). SCAN fully supports multiple subnets. This means that you can create one SCAN for each subnet in which you want your cluster to operate.

Restricted Service Registration in Oracle RAC

The valid node checking feature provides the ability to configure and dynamically update a set of IP addresses or subnets from which registration requests are allowed by the listener.

Database instance registration with a listener succeeds only when the request originates from a valid node. The network administrator can specify a list of valid nodes, excluded nodes, or disable valid node checking altogether. The list of valid nodes explicitly lists the nodes and subnets that can register with the database. The list of excluded nodes explicitly lists the nodes that cannot register with the database. The control of dynamic registration results in increased manageability and security of Oracle RAC deployments.

By default, the SCAN listener agent sets REMOTE_ADDRESS_REGISTRATION_listener_name to a private IP endpoint. The SCAN listener accepts registration requests only from the private network. Remote nodes that are not accessible to the private network of the SCAN listener must be included in the list of valid nodes by using the registration_invited_nodes_alias parameter in the listener.ora file, or by modifying the SCAN listener using the command-line interface, SRVCTL.

Note:

Starting with Oracle Grid Infrastructure 12c, for a SCAN listener, if the VALID_NODE_CHECKING_REGISTRATION_listener_name and REGISTRATION_INVITED_NODES_listener_name parameters are set in the listener.ora file, then the listener agent overwrites these parameters.

If you use the SRVCTL utility to set the invitednodes and invitedsubnets values, then the listener agent automatically sets VALID_NODE_CHECKING_REGISTRATION_listener_name to SUBNET and sets REGISTRATION_INVITED_NODES_listener_name to the specified list in the listener.ora file.

For other listeners managed by CRS, the listener agent sets VALID_NODE_CHECKING_REGISTRATION_listener_name to SUBNET in the listener.ora file only if it is not already set in the listener.ora file. The SRVCTL utility does not support setting the invitednodes and invitedsubnets values for a non-SCAN listener. The listener agent does not update REGISTRATION_INVITED_NODES_listener_name in the listener.ora file for a non-SCAN listener.
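
As a sketch (the node names and subnet are illustrative; verify the exact options for your release in the SRVCTL command reference), you can update the valid node list for the SCAN listener as follows:

srvctl modify scan_listener -invitednodes node1,node2 -invitedsubnets 192.0.2.0/24
srvctl stop scan_listener
srvctl start scan_listener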

About Oracle RAC Software Components

Oracle RAC databases generally have two or more database instances that each contain memory structures and background processes.

An Oracle RAC database has the same processes and memory structures as single-instance Oracle databases. Oracle RAC also has additional processes and memory structures that are specific to Oracle RAC. Any one instance's database view is nearly identical to any other instance's view in the same Oracle RAC environment; the view is a single-system image of the environment.
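
You can list the instances that make up an Oracle RAC database from any one of them by querying the GV$INSTANCE view:

SELECT INST_ID, INSTANCE_NAME, STATUS FROM GV$INSTANCE;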

Each instance has a buffer cache in its System Global Area (SGA). Using Cache Fusion, Oracle RAC environments logically combine each instance's buffer cache to enable the instances to process data as if the data resided on a logically combined, single cache.

Note:

  • The In-Memory Transaction Manager integrates with the Cache Fusion protocol.

  • The SGA size requirements for Oracle RAC are greater than the SGA requirements for noncluster Oracle databases due to Cache Fusion.

To ensure that each Oracle RAC database instance obtains the block that it requires to satisfy a query or transaction, Oracle RAC instances use two processes, the Global Cache Service (GCS) and the Global Enqueue Service (GES). The GCS and GES maintain records of the statuses of each data file and each cached block using a Global Resource Directory (GRD). The GRD contents are distributed across all of the active instances, which effectively increases the size of the SGA for an Oracle RAC instance.

After one instance caches data, any other instance within the same cluster database can acquire a block image from another instance in the same database faster than by reading the block from disk. Therefore, Cache Fusion moves current blocks between instances rather than re-reading the blocks from disk. When a consistent block is needed or a changed block is required on another instance, Cache Fusion transfers the block image directly between the affected instances. Oracle RAC uses the private interconnect for interinstance communication and block transfers. The GES Monitor and the Instance Enqueue Process manage access to Cache Fusion resources and enqueue recovery processing.

Cache Fusion monitors the latency on the private networks and the service time on the disks, and automatically chooses the best path. If shared disks include low latency SSDs, then Oracle automatically chooses the best path.

About Oracle RAC Background Processes

The global cache service (GCS) and global enqueue service (GES) processes, along with the global resource directory (GRD), collaborate to enable Cache Fusion. The Oracle RAC processes and their identifiers are as follows:

  • ACMS: Atomic Controlfile to Memory Service (ACMS)

    In Oracle RAC environments, the ACMS process on each instance is an agent that contributes to ensuring a distributed SGA memory update is either globally committed on success or globally aborted if a failure occurs.

  • GTX0-j: Global Transaction Process

    The GTX0-j process provides transparent support for XA global transactions in Oracle RAC environments. The database autotunes the number of these processes based on the workload of XA global transactions.

  • LMON: Global Enqueue Service Monitor

    The LMON process monitors global enqueues and resources across the cluster and performs global enqueue recovery operations.

  • LMD: Global Enqueue Service Daemon

    The LMD process manages incoming remote resource requests within each instance.

  • LMS: Global Cache Service Process

    The LMS process maintains records of the data file statuses and each cached block by recording information in the global resource directory (GRD). The LMS process also controls the flow of messages to remote instances, manages global data block access, and transmits block images between the buffer caches of different instances. This processing is part of Cache Fusion.

  • LCK0: Instance Enqueue Process

    The LCK0 process manages non-Cache Fusion resource requests such as library and row cache requests.

  • RMSn: Oracle RAC Management Processes (RMSn)

    The RMSn processes perform manageability tasks for Oracle RAC. Tasks that are accomplished by an RMSn process include the creation of resources that are related to Oracle RAC when new instances are added to the clusters.

  • RSMN: Remote Slave Monitor

    The RSMN process manages background slave process creation and communication on remote instances. These background slave processes perform tasks on behalf of a coordinating process running in another instance.
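
To see which of these background processes are running on an instance, you can query the V$BGPROCESS view (a nonzero PADDR indicates a running process); for example:

SELECT NAME, DESCRIPTION FROM V$BGPROCESS
   WHERE PADDR <> HEXTORAW('00') AND (NAME LIKE 'LM%' OR NAME LIKE 'LCK%');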

Note:

Many of the Oracle Database components that this section describes are in addition to the components that are described for single-instance Oracle databases in Oracle Database Concepts.

Overview of Automatic Workload Management with Dynamic Database Services

Services represent groups of applications with common attributes, service level thresholds, and priorities.

Application functions can be divided into workloads that are identified by services. For example, Oracle E-Business Suite can define a service for each responsibility, such as general ledger, accounts receivable, order entry, and so on. A service can span one or more Oracle Database instances, or multiple databases in a global cluster. A single instance can support multiple services. The number of instances that are serving a service is transparent to the application. Services provide a single system image to manage competing applications, and to enable each workload to be managed as a unit.

Middle tier applications and clients select a service by specifying the service name as part of the connection in the TNS connect string. For example, data sources for Oracle WebLogic Server are set to route to a service. Using Net Easy*Connection, this connection comprises simply the service name and network address, as follows: user_name/password@SCAN/service_name. Server-side work, such as Oracle Scheduler, Parallel Query, and Oracle GoldenGate queues, set the service name as part of the workload definition. For Oracle Scheduler, jobs are assigned to job classes, and job classes run within services. For Parallel Query and Parallel DML, the query coordinator connects to a service, and the parallel query slaves inherit the service for the duration of the parallel execution. For Oracle GoldenGate, streams queues are accessed using services. Work executing under a service inherits the thresholds and attributes for the service and is measured as part of the service.

Oracle Database Resource Manager binds services to consumer groups and priorities. Binding services by groups and priorities enables the database to manage the services in the order of their importance. For example, the DBA can define separate services for high priority online users, and lower priority for internal reporting applications. Likewise, the DBA can define Gold, Silver and Bronze services to prioritize the order in which requests are serviced for the same application. When planning the services for a system, the plan should include the priority of each service relative to the other services. In this way, Oracle Database Resource Manager can satisfy the priority-one services first, followed by the priority-two services, and so on.

When users or applications connect to a database, Oracle recommends that you use a service that is specified in the CONNECT_DATA portion of the connect string. Oracle Database automatically creates one database service when the database is created but the behavior of this service is different from that of database services that you subsequently create. To enable more flexibility in the management of a workload that uses the database, Oracle Database enables you to create multiple services and specify on which instances the services start. If you are interested in greater workload management flexibility, then continue reading this chapter to understand the added features that you can use with services.
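
As an illustration of creating prioritized services (the database, service, and instance names are hypothetical), you could use SRVCTL as follows:

srvctl add service -db sales -service gold_svc -preferred sales1,sales2
srvctl add service -db sales -service bronze_svc -preferred sales3 -available sales1
srvctl start service -db sales -service gold_svc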

Note:

The features discussed in this chapter do not work with the following default database services: DB_NAME, DB_UNIQUE_NAME, PDB_NAME, SYS$BACKGROUND, and SYS$USERS. Oracle strongly recommends that you not use these services for applications to connect to the database. You must create cluster managed services to take advantage of these features. You can only manage the services that you create. Any service that a database creates is automatically managed by the database server.

Dynamic Database Services

Dynamic database services enable you to manage workload distributions to provide optimal performance for users and applications. Dynamic database services offer the following:

  • Services: Oracle Database provides a powerful automatic workload management facility, called services, to enable the enterprise grid vision. Services are entities that you can define in Oracle Real Application Clusters (Oracle RAC) databases that enable you to group database workloads, route work to the optimal instances that are assigned to offer the service, and achieve high availability for planned and unplanned actions.

  • High Availability Framework: An Oracle RAC component that enables Oracle Database to always maintain components in a running state.

  • Fast Application Notification (FAN): Provides information to Oracle RAC applications and clients about cluster state changes and Load Balancing Advisory events, such as UP and DOWN events for instances, services, or nodes. FAN has two methods for publishing events to clients, the Oracle Notification Service daemon, which is used by Java Database Connectivity (JDBC) clients including the Oracle Application Server, and Oracle GoldenGate Advanced Queueing, which is only used by previous releases of Oracle Call Interface (OCI) and Oracle Data Provider for .NET (ODP.NET) clients.

    Note:

    All Oracle Database clients use Oracle Notification Service.

  • Transaction Guard: Provides a protocol and an API for at-most-once running of transactions in case of unplanned outages and duplicate submissions.

  • Application Continuity: Provides a general purpose infrastructure that replays an in-flight request when a recoverable error is received, masking many system, communication, and storage outages, and hardware failures. Unlike existing recovery technologies, this feature attempts to recover the transactional and non-transactional session states beneath the application, so that the outage appears to the application as a delayed execution.

  • Connection Load Balancing: An Oracle Net Services feature that balances incoming connections across all of the instances that provide the requested database service.

  • Load Balancing Advisory: Provides information to applications about the current service levels that the database and its instances provide. The load balancing advisory makes recommendations to applications about where to direct application requests to obtain the best service based on the management policy that you have defined for that service. Load balancing advisory events are published through Oracle Notification Service.

  • Automatic Workload Repository (AWR): Tracks service-level statistics as metrics. You can create server-generated alerts for these metrics when the statistics exceed or fail to meet certain thresholds.

  • Fast Connection Failover (FCF): Enables Oracle Clients to provide rapid failover of connections by subscribing to FAN events.

  • Runtime Connection Load Balancing: Enables Oracle Clients to provide intelligent allocations of connections in the connection pool, based on the current service level provided by the database instances when applications request a connection to complete some work.

  • Single Client Access Name (SCAN): Provides a single name for clients connecting to Oracle RAC that does not change throughout the life of a cluster, even if you add or remove nodes from the cluster. Clients connecting with SCAN can use a simple connection string, such as a thin JDBC URL or EZConnect, and still achieve load balancing and client connection failover goals.

You can deploy Oracle RAC and non-cluster Oracle Database environments to use dynamic database service features in many different ways. Depending on the number of nodes and your environment complexity and objectives, your choices for optimal automatic workload management and high-availability configuration depend on several considerations, which are explained in the Automatic Workload Management topics.

Overview of Server Pools and Policy-Managed Databases

Server pools are the basis for policy-managed databases.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

You can continue to use existing server pools, and create new pools and policies. Resources using existing server pools can continue to use them transparently.

The use of CRS configuration policies and the CRS policy set can be desupported in a future release. In place of server pools and policy-managed databases, Oracle recommends that you use the new "Merged" management style.

You can have Oracle RAC databases, whether multi-node or Oracle Real Application Clusters One Node (Oracle RAC One Node), that use the following deployment models:

  • Administrator-managed deployment requires that you statically configure each database instance to run on a specific node in the cluster, and that you configure database services to run on specific instances belonging to a certain database using the preferred and available designation.

  • Policy-managed deployment is based on server pools, where database services run within a server pool as singletons or uniformly across all of the servers in the server pool. Databases are deployed in one or more server pools, and the size of the server pools determines the number of database instances in the deployment.
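
The difference between the two models shows up in how a service is placed. The following SRVCTL sketch uses hypothetical database, instance, service, and server pool names:

# Administrator-managed: the service is tied to specific instances
srvctl add service -db sales -service oltp -preferred sales1 -available sales2

# Policy-managed: the service is tied to a server pool
srvctl add service -db sales -service oltp -serverpool online -cardinality UNIFORM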

Starting with Oracle Database 21c, the two management policies have been merged into a single management style, with some of the features of each deployment model.

Introduction to Server Pools

Server pools logically apportion a cluster into groups of servers offering database or application services.

Server pool properties control the scalability and availability of those databases and applications. You can configure each server pool with a minimum and maximum size, which determines scalability. Oracle Clusterware manages availability between server pools, and you can further regulate availability by configuring the importance value of individual server pools.

Servers are assigned to server pools by number, not by name. Therefore, you must configure every server so that it can run any database. If you cannot configure servers this way (for example, because of heterogeneous servers or storage connectivity), then you can restrict server pool membership eligibility by using server category definitions.

Examples of Using Server Pools

This section provides examples of using server pools.

Minimum and Maximum Number of Servers

You can use the MIN_SIZE and MAX_SIZE server pool parameters to influence how many servers are in a server pool.

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

Consider a four-node cluster configured into two server pools named online and backoffice. A database named dbsales runs in the online server pool offering the browse, search, and salescart services. A database named dberp runs in the backoffice server pool and offers the inventory and shipping services, as shown in the following image. During normal business hours the enterprise requires a minimum of two instances of the dbsales database and one instance of the dberp database to meet normal demand.

Figure 1-2 Server Placement by Minimum and Maximum Limits


In this policy-managed deployment, the value of the MIN_SIZE server pool attribute for the online server pool is 2, while the value of the MIN_SIZE server pool attribute for the backoffice server pool is 1. Configured this way, Oracle Clusterware ensures that there are always two servers in the online server pool and one server in the backoffice server pool. Because this is a four-node cluster, there is one server left not assigned to either server pool. Where that last server gets deployed is determined by the MAX_SIZE server pool parameter of each server pool. If the sum of the values of the MAX_SIZE server pool attribute for each server pool is less than the total number of servers in the cluster, then the remaining servers stay in the Free server pool awaiting a failure of a deployed node.

If the value of MAX_SIZE is greater than that of MIN_SIZE, then the remaining server is deployed into the server pool whose importance value is greatest, as shown in Figure 1-2, and fully discussed in the next section. In this case, the server is a shareable resource that can be relocated online to join the server pool where it is required. For example, during business hours the server could be given to the online server pool to add an instance of the dbsales database, but after hours it could be relocated to the backoffice server pool, adding a dberp database instance. All such movements are done online, and the affected database instances are shut down transactionally.

These two policy-managed databases are running only the instances that are required and they can be dynamically increased or decreased to meet demand or business requirements.
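
Under the example configuration above (a sketch; the pool names and sizes come from the scenario), the server pools could be created with SRVCTL:

srvctl add srvpool -serverpool online -min 2 -max 3 -importance 10
srvctl add srvpool -serverpool backoffice -min 1 -max 2 -importance 5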

IMPORTANCE Attribute of Server Pools

The IMPORTANCE server pool attribute is used at cluster startup and in response to a node failure or eviction.

You can configure server pools with different importance levels to determine which databases are started first and which databases remain online in case there is a multi-node outage.

Consider a four-node cluster that hosts a database named dbapps in two server pools, sales and backoffice. Two services, orderentry and billing, run in the sales server pool, while two other services, erp and reports, run in the backoffice server pool, as shown in the following image. By configuring the value of the IMPORTANCE server pool attribute of the sales server pool higher than that of the backoffice server pool, the services in sales start first when the cluster starts and are always available, even if there is only one server left running after a multi-node failure. The IMPORTANCE server pool attribute enables you to rank services and also eliminates the requirement to run a service on all nodes in a cluster to ensure that it is always available.

Figure 1-3 Server Pool Importance

Description of Figure 1-3 follows
Description of "Figure 1-3 Server Pool Importance"
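The ranking shown in Figure 1-3 can be sketched with srvctl. The pool names follow the example; the importance values are illustrative, as only their relative order matters:

```shell
# Illustrative sketch: rank the sales pool above backoffice so its
# services start first at cluster startup and remain online longest
# after a multi-node failure.
srvctl modify serverpool -serverpool sales -importance 10
srvctl modify serverpool -serverpool backoffice -importance 5
```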
Consolidation of Databases

Policy-managed deployments facilitate consolidation.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

You can use several different approaches, either discretely or combined, to consolidate Oracle databases. In the case of schema consolidation, where multiple applications are being hosted in a single database separated into discrete schemas or pluggable databases (PDBs), you can use server pools to meet required capacity. Because of the dynamic scaling property of server pools, you can increase or decrease the number of database instances to meet current demand or business requirements. Since server pools also determine which services run together or separately, you can configure and maintain required affinity or isolation.

When it is not possible to use schema consolidation (for example, because of version requirements), you can host multiple databases on a single set of servers. Policy-managed databases facilitate this database consolidation because they can share the same server pool by making use of instance caging, which enables you to dynamically scale databases both horizontally (using server pool size) and vertically (using the CPU_COUNT server configuration attribute) to meet demand or business policies and schedules.
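Instance caging itself is enabled per instance by setting CPU_COUNT and activating a Resource Manager plan. The following is a minimal, hedged sketch; the CPU value and plan name are illustrative:

```sql
-- Illustrative sketch: cap each instance of this database at 4 CPUs
-- and enable Resource Manager, which is required for instance caging
-- to take effect.
ALTER SYSTEM SET cpu_count = 4 SCOPE=BOTH SID='*';
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE=BOTH SID='*';
```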

By contrast, with administrator-managed databases, you are required to reserve capacity on each server to absorb workload failing over should a database instance or server fail. With policy-managed databases, however, you can effectively rank server pools by the business necessity of the workloads that they are running using the MIN_SIZE, MAX_SIZE, and IMPORTANCE server pool attributes.

When the failure of a server brings a server pool to below its configured minimum number of servers, another server will move from a less important server pool to take its place and bring the number of servers back up to the configured minimum. This eliminates the risk of cascade failures due to overloading the remaining servers and enables you to significantly reduce or even eliminate the need to reserve capacity for handling failures.

Migrating or converting to policy-managed databases also enables cluster consolidation and creates larger clusters that have greater availability and scalability because of the increased number of servers available to host and scale databases. Because policy-managed databases do not require binding their instance names to a particular server and binding services to particular instances, the complexity of configuring and managing large clusters is greatly reduced.

An example deployment is shown in the following figure where the previous two cluster examples (shown in Figure 1-2 and Figure 1-3) are consolidated into a single cluster, making use of both database consolidation (using instance caging) and cluster consolidation (using server pools) configured so that workloads are properly sized and prioritized.

Figure 1-4 Consolidating Databases

Description of Figure 1-4 follows
Description of "Figure 1-4 Consolidating Databases"

Deploying Policy-Managed Databases

When you deploy a policy-managed database, you must first determine the services and their required sizing, taking into account that services cannot span server pools.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

If you are going to collocate this database with other databases, then you should factor in its CPU requirements relative to the other hosted databases, and also factor in the value of its CPU_COUNT attribute for instance caging, so that you can size the database both vertically and horizontally in one or more server pools.

If you are going to collocate the server pools for this database with other server pools, then consider configuring the server pools to adjust the server pool sizes on a calendar or event basis to optimize meeting demand and business requirements. Once you have determined the sizes of the server pools, and configured the appropriate values for the MIN_SIZE and MAX_SIZE server pool attributes, you can then determine the relative importance of each server pool.

You, as the cluster administrator, create policy-managed database server pools using the srvctl add serverpool command. You can modify the properties of the server pool using the srvctl modify serverpool command in the Oracle Grid Infrastructure home.

While it is possible to create a server pool using DBCA, Oracle recommends this only for small, single-server-pool deployments, because DBCA will fail if servers are already allocated to other server pools. Additionally, if the cluster is made up of servers with different capacities, such as old and new servers, Oracle recommends that you set up server category definitions that define the minimum server requirements for a server to join each server pool.

After you create the server pools, you can run DBCA from the appropriate database home. Depending on the database type and task, you will be presented with different default options. For all new Oracle RAC and Oracle RAC One Node databases, including container databases (CDBs), the Policy-Managed option is the default and the option that Oracle recommends.

If you are upgrading your database from an administrator-managed database, then you will not have the option to directly upgrade to a policy-managed database. After you upgrade, however, you can convert the database to policy managed using the srvctl modify database command.

When you convert from an administrator-managed database to a policy-managed database, the instance names are automatically updated to include the underscore (for example: orcl1 becomes orcl_1). The underscore is required so that the database can automatically create instances when a server pool grows in size.
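As a sketch, such a conversion might look like the following; the database and server pool names are hypothetical:

```shell
# Illustrative sketch: convert administrator-managed database orcl to
# policy managed by placing it in the online server pool. After the
# conversion, instance orcl1 becomes orcl_1.
srvctl modify database -db orcl -serverpool online
```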

Managing Policy-Managed Databases

Managing a policy-managed database requires fewer configuration and reconfiguration steps than managing an administrator-managed one with respect to creation, sizing, patching, and load balancing.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

You can continue to use existing server pools, and create new pools and policies. Resources using existing server pools can continue to use them transparently.

The use of CRS configuration policies and the CRS policy set may be desupported in a future release. In place of server pools and policy-managed databases, Oracle recommends that you use the new "Merged" management style.

With existing policy-managed instances, because any server in the server pools within the cluster can run any of the databases, you do not have to create and maintain database instance-to-node-name mappings. If, however, you want a database to use a specific instance name whenever it runs on a particular node, then you can create instance-to-node-name mappings using the srvctl modify instance -db db_unique_name -instance inst_name -node node_name command. This can be useful when scripts on a particular node connect to the database using a fixed ORACLE_SID value.
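A concrete mapping might look like the following; the database, instance, and node names are hypothetical:

```shell
# Illustrative sketch: always use instance name orcl_2 when database
# orcl runs on node2, so local scripts can rely on ORACLE_SID=orcl_2.
srvctl modify instance -db orcl -instance orcl_2 -node node2
```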

You can perform maintenance tasks such as patching by relocating servers into the Free pool or by adjusting the server pool minimum and maximum sizes, thereby retaining required availability.
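For example, draining a server ahead of patching might be sketched as follows; the server name is hypothetical:

```shell
# Illustrative sketch: relocate server test4 to the Free pool so it can
# be patched while the remaining servers keep each pool at its
# configured minimum size.
srvctl relocate server -servers "test4" -serverpool Free
```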

Policy-managed databases also facilitate the management of services, because they are assigned to a single server pool and run as singletons or uniform across all servers in the pool. You no longer have to create or maintain explicit preferred and available database instance lists for each service. If a server moves into a server pool because of manual relocation or a high availability event, all uniform services and their dependent database instances are automatically started. If a server hosting one or more singleton services goes down, those services will automatically be started on one or more of the remaining servers in the server pool. In the case of Oracle RAC One Node, the corresponding database instance will also be started automatically.

Managing services relative to each other is improved by making use of the importance attribute of each server pool. Each service running in a server pool inherits the server pool's importance relative to the other server pool-hosted services in the cluster. If the minimum size of the most important server pool is greater than zero, then the services and associated database instances in that server pool are started first on cluster startup and will be the last services and database instances running, as long as there is one server running in the cluster. You can offer services not critical to the business in the least important server pool, ensuring that, should sufficient resources not be available due to demand or failures, those services will eventually be shut down and the more business-critical services remain available.

Because many management tasks may involve making changes that can affect multiple databases, services, or server pools in a consolidated environment, you can use the evaluate mode for certain SRVCTL commands to get a report of the resource impact of a command.

Consider the following example, which evaluates the effect on the system of modifying a server pool:

$ srvctl modify srvpool -l 3 -g online -eval

Service erp1 will be stopped on node test3
Service reports will be stopped on node test3
Service inventory will be stopped on node test3
Service shipping will be stopped on node test3
Database dbsales will be started on node test3
Service orderentry will be started on node test3
Service billing will be started on node test3
Service browse will be started on node test3
Service search will be started on node test3
Service salescart will be started on node test3
Server test3 will be moved from pool backoffice to pool online

As shown in the preceding example, modifying a server pool can result in many resource state changes. You can manage a policy set by using either Oracle Clusterware or Oracle Database Quality of Service Management.

Policy-Based Cluster Management

Oracle Clusterware supports the management of a cluster configuration policy set as a native Oracle Clusterware feature.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

You can continue to use existing server pools, and create new pools and policies. Resources using existing server pools can continue to use them transparently.

The use of CRS configuration policies and the CRS policy set may be desupported in a future release. In place of server pools and policy-managed databases, Oracle recommends that you use the new "Merged" management style.

A cluster configuration policy contains one definition for each server pool that is defined in the system, and also specifies resource placement and cluster node availability. A cluster configuration policy set defines the names of all of the server pools that are configured in a cluster, and contains one or more configuration policies.

Only one configuration policy is in effect at any one time, but administrators typically create several configuration policies to reflect the different business needs and demands based on calendar dates or time-of-day parameters. For instance, morning hours during business days are typically when most users log in and download their email, while email-related workloads are usually light at night and on weekends. In such cases, you can use cluster configuration policies to define the server allocation based on the expected demand. More specifically for this example, a configuration policy that allocates more servers to OLTP workloads is in effect during workday mornings, and another configuration policy allocates more servers to batch workloads on weekends and workday evenings.

Using cluster configuration policies can also help manage clusters that comprise servers of different capabilities, such as different computer and memory sizes (heterogeneous). To create management and availability policies for clusters comprised of heterogeneous server types, the cluster administrator can create server categories based on server attributes. These attributes can restrict which servers can be assigned to which server pools. For example, if you have some servers in a cluster that run older hardware, then you can use an attribute to specify that these servers should only be allocated to the server pools that support batch jobs and testing, instead of allocating them to the server pools that are used for online sales or other business-critical applications.

Overview of Oracle Database Quality of Service Management

Oracle Database Quality of Service Management (Oracle Database QoS Management) is an automated, policy-based product that monitors the workload requests for an entire system.

Oracle Database QoS Management manages the resources that are shared across applications, and adjusts the system configuration to keep the applications running at the performance levels needed by your business. Oracle Database QoS Management responds gracefully to changes in system configuration and demand, thus avoiding additional oscillations in the performance levels of your applications.

Oracle Database QoS Management monitors and manages Oracle RAC database workload performance objectives by identifying bottlenecked resources impacting these objectives, and both recommending and taking actions to restore performance. Administrator-managed deployments bind database instances to nodes but policy-managed deployments do not, so the Oracle Database QoS Management server pool size resource control is only available for the latter. All other resource management controls are available for both deployments.

Note:

Starting with Oracle Grid Infrastructure 21c, policy-managed databases are deprecated.

You can continue to use existing server pools, and create new pools and policies. Resources using existing server pools can continue to use them transparently.

The use of CRS configuration policies and the CRS policy set may be desupported in a future release. In place of server pools and policy-managed databases, Oracle recommends that you use the new "Merged" management style.

Oracle Database QoS Management supports administrator-managed Oracle RAC and Oracle RAC One Node databases with its Measure-Only, Monitor, and Management modes. This enables schema consolidation support within an administrator-managed Oracle RAC database by adjusting the CPU shares of performance classes running in the database. Additionally, database consolidation is supported by adjusting CPU counts for databases hosted on the same physical servers.

Because administrator-managed databases do not run in server pools, the ability to expand or shrink the number of instances by changing the server pool size that is supported in policy-managed database deployments is not available for administrator-managed databases. This new deployment support is integrated into the Oracle QoS Management pages in Oracle Enterprise Manager Cloud Control.

Overview of Hang Manager

Hang Manager is an Oracle Database feature that automatically detects and resolves system hangs.

In earlier releases, Hang Manager only identified system hangs and then dumped the relevant information about the hang into a trace file. In Oracle Database 12c release 2 (12.2) and later releases, Hang Manager can take action on and attempt to resolve the system hang. Hang Manager runs in both single-instance and Oracle RAC databases.

Hang Manager functions as follows:

  • Detects a system hang, then analyzes the hang and verifies its cause. It then applies heuristics to decide on a course of action to resolve the hang.

  • Automates the tasks that previously required manual steps by a DBA, such as providing trace files to My Oracle Support so that support personnel could identify the source of the hang, thereby minimizing or eliminating database and application downtime.

  • Periodically scans all processes, and in successive scans analyzes a smaller subset of processes that are holding resources. Hang Manager ignores a process if nothing is waiting on the resource that the process holds.

  • Considers cross-instance hangs, which are hangs where the holder is a database process waiting on a response from an Oracle ASM instance.

  • Is aware of processes running in reader nodes instances, and checks whether any of these processes are blocking progress on Hub Nodes and takes action, if possible.

  • Considers the Oracle Database Quality of Service Management settings of the holder.

  • Terminates the holder process so that the next process waiting on that resource can proceed, preventing a hang.

  • Notifies a DBA with an ORA-32701 error message in the alert log.

Overview of Database In-Memory and Oracle RAC

Every Oracle RAC node has its own In-Memory (IM) column store. By default, populated objects are distributed across all IM column stores in the cluster.

Oracle recommends that you size the IM column stores equally on every Oracle RAC node. If an Oracle RAC node does not require an IM column store, then set the INMEMORY_SIZE parameter to 0.

Starting with Oracle Database 19c, Database In-Memory has a new Base Level feature that allows you to use Database In-Memory with up to a 16 GB column store without requiring the Database In-Memory option. In an Oracle RAC database, the INMEMORY_SIZE setting in each database instance must not exceed 16 GB. Set the INMEMORY_FORCE parameter to BASE_LEVEL to enable this feature.
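A hedged sketch of enabling the Base Level feature follows; the size is illustrative, and both parameters take effect only after an instance restart:

```sql
-- Illustrative sketch: enable the In-Memory Base Level with a 16 GB
-- column store on each instance. Both parameters are static, so the
-- changes apply after the instances restart.
ALTER SYSTEM SET inmemory_force = BASE_LEVEL SCOPE=SPFILE SID='*';
ALTER SYSTEM SET inmemory_size = 16G SCOPE=SPFILE SID='*';
```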

It is possible to have completely different objects populated on every node, or to have larger objects distributed across all of the IM column stores in the cluster. On Oracle Engineered Systems, it is also possible for the same objects to appear in the IM column store on every node. The distribution of objects across the IM column stores in a cluster is controlled by two subclauses to the INMEMORY attribute: DISTRIBUTE and DUPLICATE.

In an Oracle RAC environment, an object that only has the INMEMORY attribute specified is automatically distributed across the IM column stores in the cluster. You can use the DISTRIBUTE clause to specify how an object is distributed across the cluster. By default, the type of partitioning used (if any) determines how the object is distributed. If the object is not partitioned, then it is distributed by rowid range. Alternatively, you can specify the DISTRIBUTE clause to override the default behavior.

On an Oracle Engineered System, you can duplicate or mirror populated objects across the IM column stores in the cluster. This technique provides the highest level of redundancy. The DUPLICATE clause controls how an object is duplicated. If you specify only DUPLICATE, then one mirrored copy of the data is distributed across the IM column stores in the cluster. To duplicate the entire object in each IM column store, specify DUPLICATE ALL.

Note:

When you deploy Oracle RAC on a non-Engineered System, the DUPLICATE clause is treated as NO DUPLICATE.
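The two subclauses might be applied as in the following sketch; the table names are hypothetical, and DUPLICATE ALL is honored only on Oracle Engineered Systems:

```sql
-- Illustrative sketch: distribute the partitions of a large fact table
-- across the cluster's IM column stores, and fully duplicate a small
-- dimension table in every IM column store.
ALTER TABLE sales INMEMORY DISTRIBUTE BY PARTITION;
ALTER TABLE products INMEMORY DUPLICATE ALL;
```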

Overview of Managing Oracle RAC Environments

When managing Oracle RAC, there are many considerations, such as the deployment type, the tools to use, how to monitor the system, and how to evaluate performance.

About Designing and Deploying Oracle RAC Environments

Any enterprise that is designing and implementing a high availability strategy with Oracle RAC must begin by performing a thorough analysis of the business drivers that require high availability.

An analysis of business requirements for high availability combined with an understanding of the level of investment required to implement different high availability solutions enables the development of a high availability architecture that achieves both business and technical objectives.

See Also:

For help choosing and implementing the architecture that best fits your availability requirements:

  • "Design and Deployment Techniques" provides a high-level overview you can use to evaluate the high availability requirements of your business.

  • Oracle Database High Availability Overview and Best Practices describes how to select the most suitable architecture for your organization, describes several high availability architectures, and provides guidelines for choosing the one that best meets your requirements, and also provides information about the Oracle Maximum Availability Architecture

About Administrative Tools for Oracle RAC Environments

You administer a cluster database as a single-system image using the Server Control Utility (SRVCTL), Oracle Enterprise Manager, SQL*Plus, and other utilities.

  • Server Control Utility (SRVCTL): SRVCTL is a command-line interface that you can use to manage an Oracle RAC database from a single point. You can use SRVCTL to start and stop the database and instances and to delete or move instances and services. You can also use SRVCTL to manage configuration information, Oracle Real Application Clusters One Node (Oracle RAC One Node), Oracle Clusterware, and Oracle ASM.

  • Fleet Patching and Provisioning: Use Fleet Patching and Provisioning to patch, upgrade, and provision Oracle RAC databases.

  • Oracle Enterprise Manager: Oracle Enterprise Manager Cloud Control is a GUI interface for managing both noncluster database and Oracle RAC database environments. Oracle recommends that you use Oracle Enterprise Manager to perform administrative tasks whenever feasible.

    You can use Oracle Enterprise Manager Cloud Control to also manage Oracle RAC One Node databases.

  • SQL*Plus: SQL*Plus commands operate on the current instance. The current instance can be either the local default instance on which you initiated your SQL*Plus session, or it can be a remote instance to which you connect with Oracle Net Services.

  • Cluster Verification Utility (CVU): CVU is a command-line tool that you can use to verify a range of cluster and Oracle RAC components, such as shared storage devices, networking configurations, system requirements, and Oracle Clusterware, in addition to operating system groups and users. You can use CVU for preinstallation checks and for postinstallation checks of your cluster environment. CVU is especially useful during preinstallation and during installation of Oracle Clusterware and Oracle RAC components. Oracle Universal Installer runs CVU after installing Oracle Clusterware and Oracle Database to verify your environment.

    Install and use CVU before you install Oracle RAC to ensure that your configuration meets the minimum Oracle RAC installation requirements. Also, use CVU for verifying the completion of ongoing administrative tasks, such as node addition and node deletion.

  • DBCA: The recommended utility for creating and initially configuring Oracle RAC, Oracle RAC One Node, and Oracle noncluster databases.

  • NETCA: Configures the network for your Oracle RAC environment.
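The CVU checks described above are run from the command line. For example, a preinstallation check might look like the following sketch; the node names are hypothetical:

```shell
# Illustrative sketch: verify that nodes node1 and node2 meet the
# requirements for installing Oracle Clusterware.
cluvfy stage -pre crsinst -n node1,node2 -verbose
```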

See Also:

  • "Administering Database Instances and Cluster Databases" for an introduction to Oracle RAC administration using SRVCTL, Oracle Enterprise Manager, and SQL*Plus

  • "Monitoring Oracle RAC and Oracle Clusterware"

  • "Server Control Utility Reference" for SRVCTL reference information

  • Oracle Clusterware Administration and Deployment Guide for information about the Cluster Verification Utility (CVU), in addition to other Oracle Clusterware tools, such as the OIFCFG tool for allocating and deallocating network interfaces and the OCRCONFIG command-line tool for managing OCR

  • Oracle Database Net Services Administrator's Guide for more information about NETCA

About Monitoring Oracle RAC Environments

Web-based Oracle Enterprise Manager Cloud Control enables you to monitor an Oracle RAC database.

Oracle Enterprise Manager Cloud Control is a central point of control for the Oracle environment that you access by way of a graphical user interface (GUI). See "Monitoring Oracle RAC and Oracle Clusterware" for more information about using Oracle Enterprise Manager to monitor Oracle RAC environments.

Also, note the following recommendations about monitoring Oracle RAC environments:

  • Use Oracle Enterprise Manager Cloud Control to initiate cluster database management tasks.

  • Use Oracle Enterprise Manager Cloud Control to administer multiple or individual Oracle RAC databases.

  • Use the global views (GV$ views), which are based on V$ views. The catclustdb.sql script creates the GV$ views. Run this script if you do not create your database with DBCA. Otherwise, DBCA runs this script for you.

    For almost every V$ view, there is a corresponding global GV$ view. In addition to the V$ information, each GV$ view contains an extra column named INST_ID, which displays the instance number from which the associated V$ view information was obtained.

  • Use the sophisticated management and monitoring features of the Oracle Database Diagnostic and Tuning packs within Oracle Enterprise Manager that include the Automatic Database Diagnostic Monitor (ADDM) and Automatic Workload Repository (AWR).

    Note:

    Although Statspack is available for backward compatibility, Statspack provides reporting only. You must run Statspack at level 7 to collect statistics related to block contention and segment block waits.
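The GV$ views described above can be queried directly. As a sketch, the following counts sessions per instance; the choice of view is illustrative:

```sql
-- Illustrative sketch: count sessions on each instance. INST_ID
-- identifies the instance from which each row was collected.
SELECT inst_id, COUNT(*) AS sessions
FROM   gv$session
GROUP  BY inst_id
ORDER  BY inst_id;
```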

About Evaluating Performance in Oracle RAC Environments

You do not need to perform special tuning for Oracle RAC; Oracle RAC scales without special configuration changes. If your application performs well on a noncluster Oracle database, then it will perform well in an Oracle RAC environment. Many of the tuning tasks that you would perform on a noncluster Oracle database can also improve Oracle RAC database performance. This is especially true if your environment requires scalability across a greater number of CPUs.

Some of the performance features specific to Oracle RAC include:

  • Dynamic resource allocation

    • Oracle Database dynamically allocates Cache Fusion resources as needed

    • The dynamic mastering of resources improves performance by keeping resources local to data blocks

  • Cache Fusion enables a simplified tuning methodology

    • You do not have to tune any parameters for Cache Fusion

    • No application-level tuning is necessary

    • You can use a bottom-up tuning approach with virtually no effect on your existing applications

  • More detailed performance statistics

    • More views for Oracle RAC performance monitoring

    • Oracle RAC-specific performance views in Oracle Enterprise Manager