Oracle Real Application Clusters 26ai Technical Architecture
October 2025
Copyright © 2020, 2025 Oracle and/or its affiliates
Copyright © 1994, 2025, Oracle and/or its affiliates.
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.
The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.
If this is software, software documentation, data (as defined in the Federal Acquisition Regulation), or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, then the following notice is applicable:
U.S. GOVERNMENT END USERS: Oracle programs (including any operating system, integrated software, any programs embedded, installed, or activated on delivered hardware, and modifications of such programs) and Oracle computer documentation or other Oracle data delivered to or accessed by U.S. Government end users are "commercial computer software," "commercial computer software documentation," or "limited rights data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, reproduction, duplication, release, display, disclosure, modification, preparation of derivative works, and/or adaptation of i) Oracle programs (including any operating system, integrated software, any programs embedded, installed, or activated on delivered hardware, and modifications of such programs), ii) Oracle computer documentation and/or iii) other Oracle data, is subject to the rights and limitations specified in the license contained in the applicable contract. The terms governing the U.S. Government's use of Oracle cloud services are defined by the applicable contract for such services. No other rights are granted to the U.S. Government.
This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.
Oracle®, Java, MySQL, and NetSuite are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Inside are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Epyc, and the AMD logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.
This software or hardware and documentation may provide access to or information about content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services unless otherwise set forth in an applicable agreement between you and Oracle. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services, except as set forth in an applicable agreement between you and Oracle.
Oracle AI Database can be configured in two basic ways: as a single-instance (SI) non-clustered database or as an Oracle Real Application Clusters (Oracle RAC) database.
A single instance database has the Oracle software, database files, and processes all on one server. It has a one-to-one relationship between the instance and the database files. It is the most basic deployment option of the Oracle AI Database.
Oracle RAC environments, however, have a one-to-many relationship between the database and instances. An Oracle RAC database can have up to 100 instances running on different servers, all of which access one database concurrently. Oracle RAC instances share access to data files and control files on shared storage. Oracle RAC provides unmatched high availability, scalability, and load balancing. This deployment provides advanced clustering capabilities, enabling multiple database instances to work together seamlessly for improved performance and fault tolerance. An Oracle RAC cluster is a combination of Oracle Grid Infrastructure (Oracle Clusterware + Oracle ASM) and an Oracle Real Application Clusters database.
Oracle Real Application Clusters (Oracle RAC) provides a highly available and scalable database solution. All Oracle RAC deployments have certain things in common: each cluster has Oracle Grid Infrastructure installed locally on every server. Oracle Grid Infrastructure includes Oracle Automatic Storage Management (Oracle ASM) and Oracle Clusterware.
Each instance in the Oracle RAC database accesses the database files stored on shared storage. Each node also runs its own operating system and requires local storage, which is used to store the Oracle Grid Infrastructure and Oracle AI Database software. In an Oracle RAC environment, while most files, such as data files, control files, and generally redo log files, are shared across all instances to ensure consistency and availability, certain files can be local to each instance. These files include local temporary files, trace files, instance-specific redo logs, dump files, and optionally the password files.
Each node needs at least one public network for client connections and a private network for inter-node communication. Each network may use multiple network interface cards (NICs) to increase bandwidth, availability, and fault tolerance. Oracle recommends that each network use multiple network interface cards per node and multiple network switches in each network to avoid a single point of failure. The Oracle RAC option with Oracle AI Database enables you to cluster database instances. Oracle RAC uses Oracle Clusterware for the infrastructure to bind multiple servers so they operate as a single system.
Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates.
An Oracle Real Application Clusters (Oracle RAC) database has multiple database instances hosted on a RAC cluster. These instances access a single set of database files on shared storage. Typically, shared storage access is provided by an Oracle ASM instance local to the node hosting the Oracle RAC database instance. However, Oracle ASM instances can run on separate physical servers from the database servers, and a server can access Oracle ASM from another server in the cluster; not every server necessarily has a local Oracle ASM instance.
Shared storage is a high-performance storage system that is accessible by all cluster nodes. It stores Oracle RAC database files, including datafiles, redo logs, and control files, ensuring all instances have access to the most recent data.
An Oracle RAC database instance is a set of processes and memory that operates on the database files. It communicates, through the database processes and Oracle Clusterware, with the other Oracle RAC database instances in the same cluster. An Oracle RAC database instance has the same memory structures and background processes as a single-instance database, with the addition of the Cache Fusion protocol implemented by the database processes. Cache Fusion is a disk-less cache coherency mechanism in Oracle RAC that ships copies of blocks directly from a holding instance's memory cache to a requesting instance's memory cache. This provides a logical merging of the System Global Areas (SGAs) across the database instances in the cluster. Global processes (not all are shown) provide the services necessary to maintain cache coherency and to recover in the event of a node failure.
Cache Fusion uses the private network (interconnect) for communication between cluster nodes. The private network facilitates communication between Oracle RAC instances. This includes the exchange of information related to data consistency, cache management, and lock management.
Oracle Real Application Clusters One Node (Oracle RAC One Node) is a single database instance of Oracle Real Application Clusters (Oracle RAC) that runs on one node in a cluster. Oracle RAC One Node provides high availability by eliminating the database server as a single point of failure, and takes further advantage of clustering to apply rolling patches and relocate database services without incurring significant downtime. Oracle RAC One Node is easily upgradable to a full multi-instance Oracle Real Application Clusters configuration.
The database instance can be relocated either manually or automatically to another node in the cluster while maintaining application continuity. Oracle RAC One Node instances may coexist with other Oracle RAC database instances in the same cluster, provided there are enough servers with sufficient resources to service all the instances.
The main difference between Oracle RAC One Node and Oracle RAC is that with an Oracle RAC One Node Database there is only one instance running at a time under normal operation. Should this one instance be impacted by planned or unplanned downtime, the instance relocates to another server.
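A planned relocation can be initiated with the srvctl utility. The following is a minimal sketch; the database name rac1node and the target node node2 are placeholders for your environment:

```shell
# Relocate the Oracle RAC One Node instance online to another cluster node
$ srvctl relocate database -db rac1node -node node2 -verbose
```

During an online relocation, a second instance is started temporarily on the target node so that sessions can migrate before the original instance shuts down.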
In an Oracle Real Application Clusters (Oracle RAC) environment, database files are stored and managed across multiple cluster nodes. Every instance can access the data files, including the data files associated with undo tablespaces, and the control files. Each instance has a dedicated undo tablespace that is stored on the shared storage. All redo log files are accessible to all instances, and each redo log file is multiplexed as in a single-instance database. When using Oracle ASM with normal redundancy, each redo log member is mirrored, and a second multiplexed member is placed in a different disk group. Each instance has at least two redo log groups, collectively called a thread. The Oracle RAC database files include:
Oracle highly recommends a common (default) SPFILE, stored on shared storage, containing both the common parameters and the instance-specific parameters. The SPFILE can be stored in Oracle ASM. Some initialization parameters are the same across all instances in a cluster, and some are unique per instance.
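The SID clause of ALTER SYSTEM distinguishes common entries from instance-specific entries in a shared SPFILE. A sketch, with the instance name RAC2 as a placeholder:

```sql
-- Common parameter: one entry applies to every instance (SID='*')
ALTER SYSTEM SET open_cursors = 500 SCOPE=BOTH SID='*';

-- Instance-specific parameter: applies only to instance RAC2
-- (a static parameter, so it must be written to the SPFILE)
ALTER SYSTEM SET instance_number = 2 SCOPE=SPFILE SID='RAC2';
```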
In an Oracle RAC environment, while most files, such as data files, control files, and generally redo log files, are shared across all instances to ensure consistency and availability, certain files can be local to each instance. These include local temporary files, trace files, instance-specific redo logs, dump files, and optionally password files. The database password file (orapw*) may be placed on shared storage, or may reside on each local node. This file contains the passwords of privileged users and must be available even when the database is NOT open.
A redo thread is a set of redo log files that an Oracle instance uses to log changes made to the database. Each instance's redo thread must contain at least two redo log groups. Each redo log group contains at least two members: a redo log and its mirrored copy. If you create your Oracle RAC database using Oracle DBCA, then your Oracle RAC database automatically implements a configuration that meets the Oracle recommendations. Oracle highly recommends using Oracle Managed Files (OMF) on Oracle ASM.
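For example, a redo log group can be added to a specific instance's thread; the thread number, group number, size, and disk group names below are placeholders:

```sql
-- Add a multiplexed redo log group to instance 2's thread,
-- with one member in each of two Oracle ASM disk groups
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 5 ('+DATA', '+FRA') SIZE 1G;
```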
In an Oracle RAC database, all the redo log files reside on shared storage. In addition, each instance has access to the redo log files of all the other instances in the cluster. If your Oracle RAC database uses Oracle ASM, then Oracle ASM manages the shared storage for the redo log files and the access to these files.
In an Oracle RAC database, each instance writes and archives the redo log groups in its redo thread in the same manner that single-instance Oracle AI Databases do. However, in recovery mode, the instance performing the recovery can read and process all the redo threads for the database, regardless of which instance generated the redo thread. Being able to read all the redo threads enables a running instance to recover the work completed by one or more failed instances.
For example, assume that you have an Oracle RAC database with two instances, instance A and instance B. If instance A is down, then instance B can read the redo log files for both instance A and B to ensure a successful recovery. Users can continue to access and update the database without waiting for the failed instance to be restarted.
The diagram assumes the use of Oracle ASM Diskgroups.
Automatic Undo Management (AUM) automatically manages undo segments within a specific undo tablespace that is assigned to an instance. Under normal circumstances, only the instance assigned to the undo tablespace can modify the contents of that tablespace. However, all instances can always read all undo blocks for consistent reads. Also, any instance can update any undo tablespace during transaction recovery, as long as that undo tablespace is not currently used by another instance for undo generation or transaction recovery.
You can dynamically switch undo tablespace assignments by executing the ALTER SYSTEM SET UNDO_TABLESPACE statement. You can run this command from any instance. In the example above, the previously used undo tablespace, undotbs1, assigned to the RAC01 instance remains assigned to it until RAC01's last active transaction commits. The pending offline tablespace may be unavailable to other instances until all transactions against that tablespace are committed. You cannot simultaneously use Automatic Undo Management (AUM) and manual undo management in an Oracle RAC database. It is highly recommended that you use the AUM mode described on this slide.
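The switch itself is a single statement; the tablespace and instance names below are placeholders:

```sql
-- Assign a new undo tablespace to instance RAC01; the previous one
-- remains pending offline until its last active transaction commits
ALTER SYSTEM SET undo_tablespace = undotbs2 SID = 'RAC01';
```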
Single Client Access Name (SCAN) is a single name assigned to the cluster. Instead of connecting directly to individual node listeners, clients can connect to a database instance via a SCAN listener using a SCAN VIP. The SCAN listener is responsible for routing connections to your cluster nodes, ensuring load balancing. It drastically simplifies your network setup, especially when the cluster has a large number of nodes. Typically there are one to three SCAN listeners per cluster, regardless of the number of nodes. In addition, SCAN VIP addresses can fail over between nodes. The Oracle RAC listener is a process that listens for incoming client connection requests and manages the traffic to the database instance. Listeners use node VIP addresses tied to specific nodes to receive connections to that node.
Each database listener registers with the SCAN listener and updates it with its current load metric. The SCAN listener receives requests and routes them to the local listener on the least-loaded node. The local listener uses a virtual IP (VIP) address. A VIP is an IP address that is not bound to a specific network interface card. It can be moved between nodes in the cluster and is a resource managed by Oracle Clusterware. VIPs are used to provide faster connection failover and avoid TCP/IP timeout issues when a node fails.
Each node will have a public IP address assigned to the network interface for that particular node.
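A client connect descriptor then needs only the SCAN name, not the individual node addresses. A hypothetical tnsnames.ora entry (all host, service, and alias names are placeholders):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = mycluster-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.example.com)))
```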
Oracle Real Application Clusters (Oracle RAC) provides two service-based load balancing options - Connection Load Balancing (CLB) and Runtime Load Balancing (RLB). Load balancing enables fine-grained control over how client connections are distributed among instances within an Oracle RAC cluster.
CLB and RLB are essential Oracle RAC features that offer advanced load balancing capabilities at both connection establishment and runtime levels. They improve system responsiveness, optimize resource utilization, and ensure high availability across the database cluster.
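Both goals are configured per service. A sketch using srvctl, with the database, service, and instance names as placeholders:

```shell
# CLB goal SHORT uses runtime load metrics for connection placement;
# RLB goal SERVICE_TIME balances work by response time
$ srvctl add service -db orcl -service oltp \
    -preferred orcl1,orcl2 \
    -clbgoal SHORT -rlbgoal SERVICE_TIME
$ srvctl start service -db orcl -service oltp
```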
A Clustered Node is a server that is a member or a planned member of a Cluster. There may be one or many nodes in a cluster. Oracle ASM (Automatic Storage Management) is Oracle's integrated storage manager specifically designed for managing database files. It simplifies the storage and management of Oracle AI Database files by abstracting and automating storage management tasks, making it particularly useful in large-scale and high-availability environments like Oracle RAC. Oracle Clusterware is a cluster management software that manages Oracle RAC environments. Its processes maintain node health, enable high availability, and coordinate resource management across the cluster. Together, they ensure Oracle RAC and applications are highly available, resilient, and seamlessly managed in clustered environments.
Each node has local storage. Local storage provides for the OS needs and the Oracle Grid Infrastructure and Oracle RAC software installation. Every clustered node has an instance of Oracle Clusterware running. The node-specific diagnostics and configuration information is stored in the local storage.
A node may have one or more other instances running an Oracle RAC database or an application. It is possible to install the software for these instances on shared storage, but it is recommended that these be installed on local storage to allow rolling upgrades, or relocation. However, using local storage also increases your software maintenance costs, because each Oracle home must be upgraded or patched separately.
Each node has a network connection to what is called the public network. This network provides the connection for applications that use the database or clustered application. Each node also has a private network connection, called the interconnect, that provides communication between the nodes of a cluster.
A cluster requires a shared storage area that is accessible to all the cluster members. The supported types of shared storage depend upon the platform you are using.
The storage is presented to the Operating system as block storage such as a partition, or as a file system such as typical mount point and directory structure as with Oracle ACFS.
Any shared storage must be cluster aware and certified for use with Oracle RAC Database.
An Oracle Extended Cluster consists of nodes that are located in multiple locations called sites. For more information see Oracle Extended Clusters.
When you deploy an Oracle cluster, you can also choose to configure the cluster as an Oracle Extended Cluster. You can extend an Oracle RAC cluster across two, or more, geographically separate sites, each equipped with its own storage. In the event that one of the sites fails, the other site acts as an active standby. The storage is replicated, so that each site may function when the other is unavailable.
Both Oracle ASM and the Oracle AI Database stack, in general, are designed to use enterprise-class shared storage in a data center. However, you can distribute compute and storage resources across two or more data centers, connecting them through Ethernet and Fibre Channel for the compute and storage needs, respectively.
You can configure an Oracle Extended Cluster when you install Oracle Grid Infrastructure. You can also do so post installation using the ConvertToExtended script. You manage your Oracle Extended Cluster using CRSCTL.
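For example, assuming a configured extended cluster, CRSCTL can report the cluster type and site configuration:

```shell
# Determine whether the cluster is an extended cluster
$ crsctl get cluster extended

# List the configured sites and their members
$ crsctl query cluster site -all
```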
Oracle recommends that you deploy Oracle Extended Clusters with normal redundancy disk groups. You can assign nodes and failure groups to sites. Sites contain failure groups, and failure groups contain disks. For normal redundancy disk groups, a disk group provides one level of failure protection, and can tolerate the failure of either a site or a failure group.
Note: Recommended distance between sites is less than 100km.
Oracle Cluster Registry (OCR) stores Cluster configuration information. You can have up to five OCR locations. Each OCR location resides on shared storage that is accessible by all nodes in the cluster. The OCR relies on distributed shared cache architecture for optimizing queries, and cluster-wide atomic updates against the cluster repository. Each node in the cluster maintains an in-memory copy of OCR, along with the CRSD that accesses its OCR cache. Only one of the CRSD processes actually reads from and writes to the OCR file on shared storage. This process refreshes its own local cache, as well as the OCR cache on other nodes in the cluster. For queries against the cluster repository, the OCR clients communicate directly with the local CRS daemon (CRSD) process on the node from which they originate. When clients need to update the OCR, they communicate through their local CRSD process to the CRSD process that is performing input/output (I/O) for writing to the repository on disk.
The main OCR client applications are CRSCTL, OUI, SRVCTL, Enterprise Manager (EM), the Oracle Database Configuration Assistant (Oracle DBCA), the Oracle Database Upgrade Assistant (Oracle DBUA), Oracle Network Configuration Assistant (Oracle NETCA), and the Oracle ASM Configuration Assistant (Oracle ASMCA). Furthermore, OCR maintains dependency and status information for application resources defined within Oracle Clusterware, specifically databases, instances, services, and node applications.
Note: In the diagram in the slide, note that a client process might also exist on server 2 but is not shown for the sake of clarity.
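The OCR configuration can be inspected from any cluster node; a sketch:

```shell
# Verify OCR integrity and show the configured OCR locations
$ ocrcheck

# List the automatic OCR backups maintained by Oracle Clusterware
$ ocrconfig -showbackup
```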
Cluster Synchronization Services (CSS) is the service that determines which nodes in the cluster are available and provides cluster group membership and simple locking services to other processes. CSS typically determines node availability via communication through the private network, with a voting file used as a secondary communication mechanism. This is done by sending network heartbeat messages through the private network and writing a disk heartbeat to the voting file, as illustrated by the top graphic in the slide.
The voting file resides on a clustered file system that is accessible to all nodes in the cluster. Its primary purpose is to help in situations where the private network communication fails. The voting file is then used to communicate the node state information to the other nodes in the cluster. Without the voting file, it can be difficult for an isolated node to determine whether it is experiencing a network failure or whether the other nodes are no longer available. It would then be possible for the cluster to enter a state where multiple sub-clusters of nodes would have unsynchronized access to the same database files.
The bottom graphic illustrates what happens when Node3 can no longer send network heartbeats to other members of the cluster. When the other nodes can no longer see Node3's heartbeats, they decide to evict that node by using the voting disk. When Node3 reads the removal message or "kill block," it generally reboots itself to ensure that all outstanding write I/Os are lost.
A "split brain" condition is generally where one of the nodes cannot talk to other nodes via network but is still able to access voting disk, and can happen for any number of reasons. The primary causes are that a cluster node is unable to respond to the heartbeat requests from another node. This can be caused by network failure/interruptions, hardware failures, software failures, or resource starvation (probably the most common cause). There are many causes, but Oracle Clusterware has a very conservative design, to absolutely guarantee the integrity of the cluster and the data.
Each Cluster requires certain network resources. During Oracle Grid Infrastructure installation and configuration, you designate interfaces for use as public, private, or Oracle ASM interfaces. A cluster must have a public network access for clients and a private network for the cluster interconnect. Oracle recommends that you have at least two network interface cards. Oracle also supports the use of link aggregations, bonded, trunked, or teamed networks for improved bandwidth and high availability. For more information about the Network Interface requirements see Network Interface Hardware Minimum Requirements.
The storage network is only needed for network attached storage.
Oracle Clusterware consists of two separate technology stacks: an upper technology stack anchored by the Cluster Ready Services (CRS) daemon (CRSD) and a lower technology stack anchored by the Oracle High Availability Services daemon (OHASD). The Clusterware components vary slightly based on the platform.
You can use the installer, gridSetup.sh, or the Oracle Fleet Patching and Provisioning (Oracle FPP) utility rhpctl to add a new node to an existing cluster. For more information, see Adding a Cluster Node on Linux and UNIX Systems.
The Cluster Ready Services (CRS) technology stack leverages several processes to manage various services. It manages resource availability, high availability, and cluster operations within Oracle Grid Infrastructure. Cluster Ready Services is the primary program for managing high availability operations in a cluster.
The CRSD manages cluster resources based on the configuration information that is stored in OCR for each resource. This includes start, stop, monitor, and failover operations. The CRSD process generates events when the status of a resource changes. When you have Oracle RAC installed, the CRSD process monitors the Oracle AI Database instance, listener, and so on, and automatically restarts these components when a failure occurs.
Even though the CRS and High Availability Services stacks include identically named programs, such as oraagent and orarootagent, these programs are distinctly different. To know more about the Oracle Clusterware platform, refer to the Overview of Oracle Clusterware Platform-Specific Software Components.
High Availability (HA) events are generated when resources change state within an Oracle Clusterware environment. Oracle Notification Service (ONS) is a facility that creates a bridge with middle-tier servers or applications to transport these events to application logic for handling or reaction. ONS is part of a larger framework known as Fast Application Notification (FAN). With FAN, applications use these events to achieve very fast detection of failures and re-balancing of connection pools, following failures and recovery. When FAN is used with an Oracle AI Database, the Advanced Queuing (AQ) feature allows HA events to be received by external applications such as .NET clients. The easiest way to receive all the benefits of FAN, with no effort, is to use a client that is integrated with FAN, such as:
Note: Not all of these applications can receive all types of FAN events or use Oracle Notification Service.
All Oracle ASM installations can be configured to serve Oracle ASM Flex Clients.
On the left, Server 1 is the traditional Standalone configuration with the ASM instance local to the database instance. In the past this was the only configuration available. Each node had to have an Oracle ASM instance, if Oracle ASM was used to provide shared storage.
Notice that Server 2 is an Oracle ASM client node. It uses the Oracle ASM service on Server 3, another node in the cluster, to access the metadata, and then performs direct access to the Oracle ASM disk storage for the data blocks.
To the right is Server 5. It has no direct access to the Oracle ASM disk storage, but gets all of its data through the I/O Server (IOS) on Server 4. The Oracle I/O Server was introduced in Oracle Grid Infrastructure version 12.2. IOS enables you to configure client clusters on such nodes. On the storage cluster, clients send their I/O requests to network ports opened by an IOServer. The IOServer instance receives data packets from the client and performs the appropriate I/O to Oracle ASM disks, similar to any other database client. On the client side, databases can use dNFS to communicate with an IOServer instance. However, there is no client-side configuration, so you are not required to provide a server IP address or any additional configuration information. On servers and clusters that are configured to access Oracle ASM files through an IOServer, the discovery of the Oracle IOS instance occurs automatically.
Each Oracle ASM instance in a cluster has access to the Oracle ASM disk storage in that cluster. Oracle ASM disks are shared disks attached to the nodes of the cluster, in possibly varying ways, as shown in the graphic. Oracle ASM manages disk groups rather than individual disks. The Oracle ASM utilities allow you to add disks, partitions, logical volumes, or network file system (NFS) files to a disk group.
Every time Oracle ASM or a database starts, a shared memory area called the System Global Area (SGA) is allocated and the Oracle ASM background processes are started. However, because Oracle ASM performs fewer tasks than a database, an Oracle ASM SGA is much smaller than the database SGA. The combination of background processes and the SGA is called an Oracle ASM instance. The instance represents the CPU and RAM components of a running Oracle ASM environment.
The SGA in an Oracle ASM instance is different in memory allocation and usage than the SGA in a database instance. The SGA in the Oracle ASM instance is divided into four primary areas as follows:
The minimum recommended amount of memory for an Oracle ASM instance is 256 MB. Automatic memory management is enabled by default on an Oracle ASM instance and will dynamically tune the sizes of the individual SGA memory components. The amount of memory that is needed for an Oracle ASM instance will depend on the amount of disk space that is being managed by Oracle ASM.
The second part of the Oracle ASM instance is the background processes. An Oracle ASM instance can have many background processes; not all are always present. Because the Oracle ASM instance shares the same code base as an Oracle AI Database instance, all the required background processes of a database instance will exist in the Oracle ASM instance. There are required background processes and optional background processes. Some of these processes may include the following:
The preceding list of processes is not complete. There can be hundreds of database instance background processes running depending on the database options and configuration of the instance. For the Oracle ASM instance, these processes will not always perform the same tasks as they would in a database instance. For example, the LGWR process in a database instance copies change vectors from the log buffer section of the SGA to the online redo logs on disk. The Oracle ASM instance does not contain a log buffer in its SGA, nor does it use online redo logs. The LGWR process in an Oracle ASM instance copies logging information to an Oracle ASM disk group.
If Oracle ASM is clustered, additional processes related to cluster management will be running in the Oracle ASM instance. Some of these processes include the following:
Additional processes are started when ADVM volumes are configured.
The Oracle ASM instance uses dedicated background processes for much of its functionality.
Oracle ASM and Oracle ACFS both use Oracle ASM disk storage. Oracle ASM presents disk groups to the database as mounted storage. Oracle ACFS presents Oracle ADVM volumes to the operating system as mount points.
Oracle ASM disk storage has two organizations: physical and logical. The physical organization consists of Oracle ASM disks: full disks or disk partitions that are assigned to Oracle ASM. Each disk group has two or more failure groups, except when redundancy is set to EXTERNAL.
The logical unit of organization of Oracle ASM disk storage is the disk group. Each disk group has two or more Oracle ASM disks assigned to it, and each Oracle ASM disk can be assigned to only one disk group. Disk groups have properties that determine their behavior, such as redundancy and failure groups.
You may set the redundancy property of the disk group to EXTERNAL (no mirroring), NORMAL (two-way mirroring), HIGH (three-way mirroring), EXTENDED, or FLEX. See Creating a Disk Group for more information about redundancy properties. When redundancy is set to FLEX, mirroring is set for the file group rather than the disk group.
In Oracle ASM, the disk group is used to manage several storage properties, such as redundancy, access control, and allowed repair time before drop. Physical disks are assigned to disk groups, which provide the functionality of RAID with much more flexibility. Disk group names are unique within the Oracle ASM instance. All disks in a disk group must be of equal size, and a disk can belong to one and only one disk group.
You specify the redundancy property for each disk group. The possible settings are EXTERNAL, NORMAL, HIGH, FLEX, and EXTENDED. When you specify EXTERNAL redundancy, Oracle ASM does not provide mirroring. If you specify NORMAL redundancy, Oracle ASM provides two-way mirroring by default for most files. When HIGH redundancy is specified, Oracle ASM provides three-way mirroring. When you specify FLEX or EXTENDED redundancy, the disk group redundancy is overridden by the file group redundancy property. EXTENDED redundancy provides two-way mirroring by default within a site and assumes that you have a vendor-supplied disk mirroring solution between sites.
By default, each disk is in its own failure group; optionally, disks may be grouped into failure groups. Normal redundancy requires a minimum of two disks or two failure groups; high redundancy requires a minimum of three. A disk group has a default redundancy that can be overridden at the file or file group level if there are sufficient failure groups. The example shows only two failure groups, so high redundancy is not possible. Depending on the redundancy setting, the blocks of a file placed in the disk group or file group are scattered across the required number of physical disks.
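The redundancy and failure group concepts above can be sketched in SQL, run as SYSASM in the Oracle ASM instance. This is a minimal illustration, not a definitive configuration: the disk group name DATA, the failure group names, and the disk paths are hypothetical placeholders.

```sql
-- NORMAL redundancy with two explicit failure groups: Oracle ASM
-- mirrors each extent across the two failure groups, so losing all
-- disks in one failure group does not lose data.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disk1', '/dev/oracleasm/disk2'
  FAILGROUP fg2 DISK '/dev/oracleasm/disk3', '/dev/oracleasm/disk4';
```

With only two failure groups, as in the example, a request for HIGH (three-way) redundancy could not be satisfied; a third failure group would be required.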
File groups are optional, but they provide more flexible storage management when multiple databases, PDBs, or clusters use the same Oracle ASM storage. To use Oracle ASM file groups, set the disk group redundancy to FLEX or EXTENDED. File groups allow you to control redundancy and per-database access for specified files within a disk group.
The example illustrates file groups in a multitenant environment. Each PDB has a file group; the file group belongs to a quota group and resides in a disk group. Note that file group and quota group names are unique within a disk group, but the same name may be used in different disk groups; you can use this for easy identification. A PDB may have only one file group in a disk group, and a quota group may contain multiple file groups. File groups and quota groups may be administered with commands in SQL*Plus or ASMCMD.
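The file group and quota group administration described above can be sketched as follows. This assumes a FLEX-redundancy disk group named DATA; the quota group name QG_HR, the PDB name HRPDB, and the 10G quota are illustrative only.

```sql
-- Create a quota group that caps the space its file groups may use.
ALTER DISKGROUP data ADD QUOTAGROUP qg_hr SET 'quota' = '10G';

-- Create a file group for the PDB and place it in the quota group.
ALTER DISKGROUP data ADD FILEGROUP fg_hrpdb DATABASE hrpdb
  SET 'quota_group' = 'qg_hr';

-- Override the disk group default: three-way mirror this PDB's files.
ALTER DISKGROUP data MODIFY FILEGROUP fg_hrpdb SET 'redundancy' = 'high';
```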
Oracle Advanced Cluster File System (Oracle ACFS) is a POSIX-compliant storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support all types of files. It supports Oracle AI Database files and application files, including install files, database data files, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and all other general-purpose application file data. Oracle ACFS presents a mountable file system to any cluster node that can access Oracle ASM.
Oracle Advanced Cluster File System (Oracle ACFS) communicates with Oracle ASM through the Oracle ADVM interface. With the addition of Oracle ADVM, Oracle ASM becomes a complete storage solution of user data for both database and non-database file needs. I/O requests to Oracle ACFS are passed through Oracle ADVM and the associated I/O drivers. The metadata required to access the Oracle ASM disk groups is provided through the Oracle ASM Proxy, but the Oracle ADVM drivers access the Oracle ASM blocks directly.
Note: Dynamic volumes supersede traditional device partitioning. Each volume is individually named and may be configured for a single file system. Oracle ADVM volumes may be created on demand from Oracle ASM disk group storage and dynamically resized as required. These attributes make Oracle ADVM volumes far more flexible than physical devices and associated partitioning schemes.
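Creating an Oracle ADVM volume and an Oracle ACFS file system on it can be sketched with ASMCMD and operating system commands. This is an illustrative Linux example, not a definitive procedure: the disk group DATA, volume name VOL1, device path, and mount point are hypothetical (the actual volume device name includes a generated suffix reported by volinfo).

```shell
# Create a 10 GB dynamic volume in disk group DATA, then look up
# the volume device path that Oracle ADVM assigned to it.
asmcmd volcreate -G DATA -s 10G VOL1
asmcmd volinfo -G DATA VOL1

# Format the volume as Oracle ACFS and mount it; the device name
# below is a placeholder for the path reported by volinfo.
mkfs -t acfs /dev/asm/vol1-123
mkdir -p /u01/app/acfsmounts/vol1
mount -t acfs /dev/asm/vol1-123 /u01/app/acfsmounts/vol1
```

Because the volume is backed by disk group storage, it can later be resized online with asmcmd volresize as requirements change.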
Oracle Real Application Clusters (Oracle RAC) supports a shared wallet for Transparent Data Encryption (TDE). You can store the shared wallet on shared storage, either Oracle ASM or Oracle ACFS. A deployment with a single wallet on a shared disk requires no additional configuration to use Transparent Data Encryption. The wallet must reside in the directory specified by the WALLET_ROOT initialization parameter, which supersedes the settings in sqlnet.ora.
This diagram describes the steps to configure a shared wallet for an Oracle RAC database:
1. Set the WALLET_ROOT initialization parameter value to the directory where the wallet is stored on the shared storage. The recommended default method is to store the shared wallet on an Oracle ASM disk group and set the WALLET_ROOT initialization parameter to +DATA/db_unique_name.
2. Set TDE_CONFIGURATION to specify a software keystore. You can also change the TDE_CONFIGURATION initialization parameter using the init.ora file on all RAC nodes by adding TDE_CONFIGURATION="KEYSTORE_CONFIGURATION=FILE".
The local copies of the wallet need not be synchronized for the duration of Transparent Data Encryption usage until the server key is re-keyed through the ADMINISTER KEY MANAGEMENT statement.
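The steps above can be sketched in SQL. This is a hedged outline, not a definitive procedure: the disk group path +DATA/FINDB and the keystore password are placeholders, and WALLET_ROOT takes effect only after a restart because it is not dynamically modifiable.

```sql
-- Point the database at the shared wallet location on Oracle ASM.
ALTER SYSTEM SET WALLET_ROOT = '+DATA/FINDB' SCOPE = SPFILE SID = '*';

-- After restarting, configure TDE to use a software keystore.
ALTER SYSTEM SET TDE_CONFIGURATION = 'KEYSTORE_CONFIGURATION=FILE'
  SCOPE = BOTH SID = '*';

-- Create the keystore under WALLET_ROOT and set the master key.
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY "wallet_pwd";
ADMINISTER KEY MANAGEMENT SET KEY IDENTIFIED BY "wallet_pwd"
  WITH BACKUP CONTAINER = ALL;
```

Because the wallet resides on shared storage, all Oracle RAC instances see the same keystore, and a re-key issued on one instance is visible to the others.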
Oracle Fleet Patching and Provisioning (Oracle FPP) is a service in Oracle Grid Infrastructure. Oracle FPP is a software lifecycle management method for provisioning and maintaining Oracle homes. It enhances the process of applying patches and provisioning databases in Oracle Grid Infrastructure environments. Oracle FPP enables centralized management of patching and provisioning tasks across a fleet of Oracle AI Databases. It is designed to meet the needs of modern enterprises by automating repetitive tasks and ensuring the timely application of patches across database fleets.
Oracle FPP enables mass deployment and maintenance of standard operating environments for databases, clusters, and user-defined software types. With Oracle Fleet Patching and Provisioning, you can also install clusters and provision, patch, scale, and upgrade Oracle Grid Infrastructure and Oracle AI Database. Additionally, you can provision applications and middleware.
An Oracle Fleet Patching and Provisioning (Oracle FPP) server always has Oracle Grid Infrastructure software installed. It is highly recommended that you also configure Oracle ASM with Oracle Advanced Cluster File System (Oracle ACFS) to take advantage of the ability to snapshot and store deltas of gold images.
Unmanaged homes are software homes, for example, a database home that was not provisioned by Oracle FPP. This graphic assumes that the Oracle FPP Server has already been provisioned. The Oracle FPP workflow in this case is as follows:
1. Create the first gold image with the rhpctl import image command. You may run this command from the Oracle FPP Server or an Oracle FPP Client. You can create gold images from either a source home or from a working copy using the rhpctl add image command.
2. Register the target cluster with rhpctl add client. The client is required to have the OS installed and to be cluster ready.
3. Provision a software home with rhpctl add workingcopy, placing it on the server, on a local file system on the Oracle FPP Client, or on an NFS mount point. This command creates a software home for the client on the client storage as specified in the command. Creating the home on the server requires that the client use Oracle ACFS with access to the same storage cluster as the server. This operation can be on an unmanaged or managed target.
Adopting a structured workflow and leveraging automation tools are essential for successful implementation and maintenance of Oracle FPP processes.
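The workflow above can be sketched as a command sequence. This is illustrative only: the image name db26_base, client name prodcluster, working copy name, and all paths are hypothetical placeholders, and the exact options vary with the deployment.

```shell
# 1. Import a gold image from an existing (source) Oracle home.
rhpctl import image -image db26_base -path /u01/stage/db_home

# 2. Register the target cluster as an Oracle FPP client.
rhpctl add client -client prodcluster -toclientdata /tmp

# 3. Provision a working copy of the gold image on the client's
#    local storage, creating the software home there.
rhpctl add workingcopy -workingcopy db26_wc1 -image db26_base \
    -client prodcluster -oraclebase /u01/app/oracle \
    -storagetype LOCAL
```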