Oracle Real Application Clusters 26ai Technical Architecture


October 2025

Copyright © 2020, 2025 Oracle and/or its affiliates

Oracle AI Database Configuration

[Diagram: comparison of a single instance database, where one server hosts one instance (processes and SGA) and its database files, with an Oracle RAC database, where instances 1 through n on servers 1 through n all access one set of database files on shared storage.]

Notes

Oracle AI Database can be configured in two basic ways: as a single instance (SI) non-clustered database or as an Oracle Real Application Clusters (Oracle RAC) database.

A single instance database has the Oracle software, database files, and processes all on one server. It has a one-to-one relationship between the instance and the database files. It is the most basic deployment option of the Oracle AI Database.

Oracle RAC environments, however, have a one-to-many relationship between the database and instances. An Oracle RAC database can have up to 100 instances running on different servers, all of which access one database concurrently. Oracle RAC instances share access to the datafiles and control files on shared storage. Oracle RAC provides unmatched high availability, scalability, and load balancing. This deployment provides advanced clustering capabilities, enabling multiple database instances to work together seamlessly for improved performance and fault tolerance. An Oracle RAC cluster is a combination of Oracle Grid Infrastructure (Oracle Clusterware + Oracle ASM) and an Oracle Real Application Clusters database.

 

Related Resources

Oracle Real Application Clusters (Oracle RAC) Overview

[Diagram: an Oracle RAC cluster of servers 1 through n connected to application clients over a public network, to one another over a private network, and to shared storage.]

Notes

Oracle Real Application Clusters (Oracle RAC) provides a highly available and scalable database solution. All Oracle RAC environments have certain things in common: each cluster has Oracle Grid Infrastructure installed locally on each server. Oracle Grid Infrastructure includes Oracle Automatic Storage Management (Oracle ASM) and Oracle Clusterware.

Each instance in the Oracle RAC database accesses the database files stored on shared storage. Each node also runs its own operating system and requires local storage, which is used to store the Oracle Grid Infrastructure and Oracle AI Database software. In an Oracle RAC environment, while most files, such as data files, control files, and generally redo log files, are shared across all instances to ensure consistency and availability, certain files can be local to each instance. These files include local temporary files, trace files, dump files, and optionally the password files.

Each node needs at least one public network for client connections and a private network for inter-node communication. Each network may use multiple network interface cards (NICs) to increase bandwidth, availability, and fault tolerance. Oracle recommends that each network use multiple network interface cards per node and multiple network switches in each network to avoid a single point of failure. The Oracle RAC option with Oracle AI Database enables you to cluster database instances. Oracle RAC uses Oracle Clusterware for the infrastructure to bind multiple servers so they operate as a single system.

Oracle Clusterware is the only clusterware that you need for most platforms on which Oracle RAC operates.

Related Resources

Oracle RAC Database

[Diagram: a three-server Oracle RAC cluster; database instances 1 through 3 access shared database files on storage, with Oracle ASM instances running on servers 1 and 2.]

Notes

An Oracle Real Application Clusters (Oracle RAC) database has multiple database instances hosted on an Oracle RAC cluster. These instances access a single set of database files on shared storage. Typically, access to the shared storage is provided by an Oracle ASM instance local to the node hosting the Oracle RAC database instance. However, Oracle ASM instances can run on separate servers from the database servers, and a server can use an Oracle ASM instance on another server in the cluster. Not every server necessarily has a local Oracle ASM instance.

Oracle RAC nodes and instances:

Shared storage is a high-performance storage system that is accessible by all cluster nodes. It stores Oracle RAC database files, including datafiles, redo logs, and control files, ensuring all instances have access to the most recent data.

Related Resources

Oracle RAC Database Instance

[Diagram: two Oracle RAC database instances on servers 1 and 2, each with an in-memory SGA (shared pool, buffer cache), database processes (PMON, SMON, LMON, LMSx, ...), and Clusterware processes (CRSD, RACGIMON, OCSSD, OPROCD, ...); the buffer caches are linked by Cache Fusion.]

Notes

An Oracle RAC database instance is a set of processes and memory that operates on the database files. It communicates, through the database processes and Oracle Clusterware, with the other Oracle RAC database instances in the same cluster. An Oracle RAC database instance has the same memory structures and background processes as a single instance database, with the addition of the Cache Fusion protocol implemented by the database processes. Cache Fusion is a disk-less cache coherency mechanism in Oracle RAC that ships copies of blocks directly from a holding instance's memory cache to a requesting instance's memory cache. This provides a logical merging of the System Global Areas (SGAs) across the database instances in the cluster. Global processes (not all are shown) provide the services necessary to maintain cache coherency and to perform recovery in the event of a node failure.

Cache Fusion uses the private network (interconnect) for communication between cluster nodes. The private network facilitates communication between Oracle RAC instances, including the exchange of information related to data consistency, cache management, and lock management.
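As a quick sanity check, you can see which network each instance registered as its interconnect. This is a diagnostic sketch against the documented GV$CLUSTER_INTERCONNECTS view; it must be run from a live Oracle RAC instance.

```sql
-- Show the interconnect network used by each instance
-- (GV$ views aggregate rows from every instance in the cluster).
SELECT inst_id, name, ip_address, is_public, source
FROM   gv$cluster_interconnects
ORDER  BY inst_id;
```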

Related Resources

Oracle RAC One Node

[Diagram: an Oracle RAC One Node cluster; a single database instance on server 1 relocates online to server 2, with Oracle Clusterware running on both servers and shared storage attached to both.]

Notes

Oracle Real Application Clusters One Node (Oracle RAC One Node) is a single database instance of Oracle Real Application Clusters (Oracle RAC) that runs on one node in a cluster. RAC One Node provides high availability by eliminating the database server as a single point of failure and takes further advantage of clustering to apply rolling patches and database service relocation without incurring significant downtime. Oracle RAC One Node is easily upgradable to a full multi-instance Oracle Real Application Clusters configuration.

The database instance can be relocated either manually or automatically to another node in the cluster while maintaining application continuity. Oracle RAC One Node instances may coexist with other Oracle RAC database instances in the same cluster, assuming there are enough servers with sufficient resources to service all the instances.

The main difference between Oracle RAC One Node and Oracle RAC is that with an Oracle RAC One Node Database there is only one instance running at a time under normal operation. Should this one instance be impacted by planned or unplanned downtime, the instance relocates to another server.

Related Resources

Oracle RAC Database Files

[Diagram: Oracle AI Database logical and physical structures; SYSTEM, UNDO, USERS, and TEMP tablespaces map to datafiles 1 through n and a tempfile, alongside the control file, server parameter file, password file, and online redo logs, all shared by instances 1 through n on servers 1 through n.]

Notes

In an Oracle Real Application Clusters (Oracle RAC) environment, database files are stored on shared storage and managed across multiple cluster nodes. Every instance can access the data files, including the data files associated with undo tablespaces, and the control files. Each instance has an undo tablespace dedicated to it, stored on the shared storage. All redo log files are accessible to all instances, and each redo log file is multiplexed as in a single instance database. When using Oracle ASM with normal redundancy, each redo log member is mirrored, and a second multiplexed member is placed in a different disk group. Each instance has at least two redo log groups; the set of redo log groups belonging to an instance is called a thread. The Oracle RAC database files include:

A common (default) SPFILE containing both the common parameters and the instance-specific parameters, placed in shared storage, is highly recommended. The SPFILE can be stored in Oracle ASM. Some initialization parameters are the same across all instances in a cluster, and some are unique per instance.
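The SID clause of ALTER SYSTEM controls whether a parameter in the shared SPFILE applies cluster-wide or to one instance. A brief sketch; the parameter values and the instance name orcl1 are illustrative:

```sql
-- Cluster-wide setting: SID='*' applies to every instance.
ALTER SYSTEM SET open_cursors = 500 SCOPE=SPFILE SID='*';

-- Instance-specific setting: applies only to the named instance.
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS1' SCOPE=SPFILE SID='orcl1';
```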

In an Oracle RAC environment, while most files, such as data files, control files, and generally redo log files, are shared across all instances to ensure consistency and availability, certain files can be local to each instance. These include local temporary files, trace files, dump files, and optionally the password files. The database password file (orapw*) may be placed in shared storage, or it may reside on each local node. This file contains the passwords of privileged users and must be available even when the database is not open.

Related Resources

Oracle RAC Redo and Redo Threads

[Diagram: instances 1 and 2 write redo threads 1 and 2 (LGWR) to the DATA disk group on shared storage and archive them (ARC0) as archived redo threads 1 and 2 in the FRA disk group.]

Notes

A redo thread is a set of redo log files that an Oracle instance uses to log changes made to the database. Each instance's redo thread must contain at least two redo log groups. Each redo log group contains at least two members: a redo log and its mirrored copy. If you create your Oracle RAC database using Oracle DBCA, then your Oracle RAC database automatically implements a configuration that meets the Oracle recommendations. Oracle highly recommends using Oracle Managed Files (OMF) on Oracle ASM.
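If you configure threads manually rather than through Oracle DBCA, a new thread is created and enabled with documented ALTER DATABASE statements. A sketch; the thread number, group numbers, and size are illustrative, and with OMF on Oracle ASM the member file names are generated automatically:

```sql
-- Create a second redo thread with two groups, then enable it
-- so an instance can acquire it at startup.
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 5 SIZE 1G;
ALTER DATABASE ADD LOGFILE THREAD 2 GROUP 6 SIZE 1G;
ALTER DATABASE ENABLE PUBLIC THREAD 2;
```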

In an Oracle RAC database, all the redo log files reside on shared storage. In addition, each instance has access to the redo log files of all the other instances in the cluster. If your Oracle RAC database uses Oracle ASM, then Oracle ASM manages the shared storage for the redo log files and the access to these files.

In an Oracle RAC database, each instance writes and archives the redo log groups in its redo thread in the same manner that single-instance Oracle AI Databases do. However, in recovery mode, the instance performing the recovery can read and process all the redo threads for the database, regardless of which instance generated the redo thread. Being able to read all the redo threads enables a running instance to recover the work completed by one or more failed instances.

For example, assume that you have an Oracle RAC database with two instances, instance A and instance B. If instance A is down, then instance B can read the redo log files for both instance A and B to ensure a successful recovery. Users can continue to access and update the database without waiting for the failed instance to be restarted. 

The diagram assumes the use of Oracle ASM Diskgroups.

Related Resources

Oracle RAC Automatic Undo Management (AUM)

[Diagram: instances RAC01 and RAC02 on servers 1 and 2, each assigned its own undo tablespace (UNDO Tablespace1, UNDO Tablespace2) on shared storage.]

Notes

Automatic Undo Management (AUM) automatically manages undo segments within a specific undo tablespace that is assigned to an instance. Under normal circumstances, only the instance assigned to the undo tablespace can modify the contents of that tablespace. However, all instances can always read all undo blocks for consistent reads. Also, any instance can update any undo tablespace during transaction recovery, as long as that undo tablespace is not currently used by another instance for undo generation or transaction recovery.

You can dynamically switch undo tablespace assignments by executing the ALTER SYSTEM SET UNDO_TABLESPACE statement. You can run this command from any instance. In the example above, the previously used undo tablespace, undotbs1, assigned to the RAC01 instance remains assigned to it until RAC01's last active transaction commits. The pending offline tablespace may be unavailable to other instances until all transactions against that tablespace are committed. You cannot simultaneously use Automatic Undo Management (AUM) and manual undo management in an Oracle RAC database. It is highly recommended that you use the AUM mode described on this slide.
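A minimal sketch of the switch described above; the new tablespace name UNDOTBS3 is illustrative, while RAC01 is the instance from the example:

```sql
-- Reassign instance RAC01 to a different undo tablespace.
-- The previous tablespace stays "pending offline" until the
-- instance's last active transaction commits.
ALTER SYSTEM SET undo_tablespace = 'UNDOTBS3' SID = 'RAC01';
```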

Related Resources

Oracle RAC Listener

[Diagram: cluster resources on three servers attached to the public network; each server has a public IP, a node VIP, and a local listener, while the SCAN IPs and SCAN listeners float across the nodes.]

Notes

Single Client Access Name (SCAN) is a single name assigned to the cluster. Instead of connecting directly to individual node listeners, clients can connect to a database instance via a SCAN listener using a SCAN VIP. The SCAN listener is responsible for routing connections to the cluster nodes, ensuring load balancing. It drastically simplifies your network setup when the cluster has a large number of nodes. Typically there are one to three SCAN listeners per cluster, regardless of the number of nodes. In addition, SCAN VIP addresses can fail over between nodes. The Oracle RAC listener is a process that listens for incoming client connection requests and manages the traffic to the database instance. Listeners use node VIP addresses tied to specific nodes to receive connections to that node.

Each database listener registers with the SCAN listener and updates it with the current load metric. The SCAN listener receives requests and routes the requests to the local listener on the least loaded node. The local listener uses a virtual IP (VIP) address. A VIP is an IP address that is not bound to a specific network interface card. It can be moved between nodes in the cluster and is a resource managed by Oracle Clusterware. VIPs are used to provide faster connection failover and to avoid TCP/IP timeout issues when a node fails.

Each node will have a public IP address assigned to the network interface for that particular node.
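Clients reference the SCAN, rather than any node name, in their connect descriptor. A sketch of a tnsnames.ora entry; the SCAN name, service name, and domain are hypothetical placeholders:

```
orcl =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = myrac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = orcl.example.com)
    )
  )
```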

Related Resources

Oracle RAC Load Balancing

[Diagram: service-level load balancing attributes.]

Connection Load Balancing (CLB):
Short: connection load balancing for applications that use run-time load balancing.
Long (default): connection load balancing for applications that do not require run-time load balancing.

Runtime Load Balancing (RLB) goals:
SERVICE_TIME: depends on the elapsed time of user calls and free CPU.
THROUGHPUT: depends on the number of user calls completed and free CPU.
SMART_CONN: load balancing is handled automatically by Oracle RAC.
NONE (default): default for OLTP workloads; looks at the number of sessions, as CLB Long does.

Oracle RAC provides a high degree of scalability as new instances are added to the cluster.

Notes

Oracle Real Application Clusters (Oracle RAC) provides two service-based load balancing options - Connection Load Balancing (CLB) and Runtime Load Balancing (RLB). Load balancing enables fine-grained control over how client connections are distributed among instances within an Oracle RAC cluster.

CLB and RLB are essential Oracle RAC features that offer advanced load balancing capabilities at both connection establishment and runtime levels. They improve system responsiveness, optimize resource utilization, and ensure high availability across the database cluster.
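Both behaviors are set per service. A sketch using the documented srvctl options -clbgoal and -rlbgoal; the database, service, and instance names are illustrative and the commands require a live cluster:

```shell
# Create a service with short connection load balancing and a
# run-time load balancing goal based on service time.
srvctl add service -db orcl -service oltp \
  -preferred orcl1,orcl2 -clbgoal SHORT -rlbgoal SERVICE_TIME

# Start the service so clients can connect to it.
srvctl start service -db orcl -service oltp
```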

Related Resources

Oracle RAC Clustered Nodes

[Diagram: each clustered server (1 through n) runs a database instance, an Oracle ASM instance, and Clusterware processes; local disk storage holds the operating system, the Oracle Grid Infrastructure home, and the Oracle AI Database home; shared storage holds the shared GI files; the nodes connect to public and private networks.]

Notes

A Clustered Node is a server that is a member or a planned member of a Cluster. There may be one or many nodes in a cluster. Oracle ASM (Automatic Storage Management) is Oracle's integrated storage manager specifically designed for managing database files. It simplifies the storage and management of Oracle AI Database files by abstracting and automating storage management tasks, making it particularly useful in large-scale and high-availability environments like Oracle RAC. Oracle Clusterware is a cluster management software that manages Oracle RAC environments. Its processes maintain node health, enable high availability, and coordinate resource management across the cluster. Together, they ensure Oracle RAC and applications are highly available, resilient, and seamlessly managed in clustered environments.

Each node has local storage. Local storage provides for the OS needs and the Oracle Grid Infrastructure and Oracle RAC software installation. Every clustered node has an instance of Oracle Clusterware running. The node-specific diagnostics and configuration information is stored in the local storage.

A node may have one or more other instances running an Oracle RAC database or an application. It is possible to install the software for these instances on shared storage, but it is recommended that these be installed on local storage to allow rolling upgrades, or relocation. However, using local storage also increases your software maintenance costs, because each Oracle home must be upgraded or patched separately.

Each node has a network connection to what is called the public network. This network provides the connection for applications and clients that use the database or clustered application. Each node also has a private network connection, also called the interconnect, that provides communication between the nodes of a cluster.

Related Resources

Oracle RAC Cluster Storage Options

Cluster-aware storage options:

Oracle ASM
Oracle Advanced Cluster File System (Oracle ACFS)
Oracle ACFS remote
Certified cluster file systems, such as IBM GPFS (IBM AIX only) and OCFS2 (Linux only)
NFS

Notes

A cluster requires a shared storage area that is accessible to all the cluster members. The supported types of shared storage depend upon the platform you are using, for example:

The storage is presented to the Operating system as block storage such as a partition, or as a file system such as typical mount point and directory structure as with Oracle ACFS.

Any shared storage must be cluster aware and certified for use with Oracle RAC Database.

Related Resources

Oracle Extended Cluster Overview

[Diagram: an Oracle Extended Cluster spanning site 1 and site 2; each site has two servers running database instances and its own Oracle ASM storage, with the sites connected by a private network.]

Notes

An Oracle Extended Cluster consists of nodes that are located in multiple locations called sites. For more information see Oracle Extended Clusters.

When you deploy an Oracle cluster, you can also choose to configure the cluster as an Oracle Extended Cluster. You can extend an Oracle RAC cluster across two, or more, geographically separate sites, each equipped with its own storage. In the event that one of the sites fails, the other site acts as an active standby. The storage is replicated, so that each site may function when the other is unavailable.

Both Oracle ASM and the Oracle AI Database stack, in general, are designed to use enterprise-class shared storage in a data center. Fibre Channel technology, however, enables you to distribute compute and storage resources across two or more data centers, and connect them through Ethernet cables and Fibre Channel, for compute and storage needs, respectively.

You can configure an Oracle Extended Cluster when you install Oracle Grid Infrastructure. You can also do so after installation by using the ConvertToExtended script. You manage your Oracle Extended Cluster using CRSCTL.
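For example, CRSCTL can report whether a cluster is extended and which sites it contains. A sketch of the documented commands; they must be run on a cluster node:

```shell
# Report whether this cluster is configured as an extended cluster.
crsctl get cluster extended

# List the sites defined for the cluster.
crsctl query cluster site -all
```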

Oracle recommends that you deploy Oracle Extended Clusters with normal redundancy disk groups. You can assign nodes and failure groups to sites. Sites contain failure groups, and failure groups contain disks. For normal redundancy disk groups, a disk group provides one level of failure protection, and can tolerate the failure of either a site or a failure group.

Note: Recommended distance between sites is less than 100km.

Related Resources

Oracle Cluster Registry (OCR)

[Diagram: three servers, each with an OCR cache and a CRSD process; only one CRSD process reads from and writes to the OCR primary and secondary files on shared storage; client processes communicate with their local CRSD.]

Notes

Oracle Cluster Registry (OCR) stores cluster configuration information. You can have up to five OCR locations. Each OCR location resides on shared storage that is accessible by all nodes in the cluster. The OCR relies on a distributed shared-cache architecture for optimizing queries, and on cluster-wide atomic updates against the cluster repository.

Each node in the cluster maintains an in-memory copy of OCR, along with the CRSD that accesses its OCR cache. Only one of the CRSD processes actually reads from and writes to the OCR file on shared storage. This process refreshes its own local cache, as well as the OCR cache on the other nodes in the cluster. For queries against the cluster repository, the OCR clients communicate directly with the local CRS daemon (CRSD) process on the node from which they originate. When clients need to update the OCR, they communicate through their local CRSD process to the CRSD process that is performing input/output (I/O) for writing to the repository on disk.

The main OCR client applications are CRSCTL, OUI, SRVCTL, Enterprise Manager (EM), the Oracle Database Configuration Assistant (Oracle DBCA), the Oracle Database Upgrade Assistant (Oracle DBUA), Oracle Network Configuration Assistant (Oracle NETCA), and the Oracle ASM Configuration Assistant (Oracle ASMCA). Furthermore, OCR maintains dependency and status information for application resources defined within Oracle Clusterware, specifically databases, instances, services, and node applications.
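Day-to-day OCR maintenance uses the ocrcheck and ocrconfig utilities. A sketch of the documented commands; the disk group name +DATA2 is illustrative, and the add command must be run as root:

```shell
# Verify OCR integrity and list the configured OCR locations.
ocrcheck

# List the automatic OCR backups maintained by Oracle Clusterware.
ocrconfig -showbackup

# Add another OCR location in a different disk group (run as root).
ocrconfig -add +DATA2
```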

Note: In the diagram in the slide, note that a client process might also exist on server 2 but is not shown for the sake of clarity.

Related Resources

Voting Disk Files

[Diagram: top, three nodes whose CSS processes see one another through heartbeats sent over the private network and heartbeats written to the voting disk; bottom, node 3 can no longer communicate over the private network, so the other nodes no longer see its heartbeats and evict it via the voting disk.]

Notes

Cluster Synchronization Services (CSS) is the service that determines which nodes in the cluster are available and provides cluster group membership and simple locking services to other processes. CSS typically determines node availability via communication through the private network, with a voting file used as a secondary communication mechanism. This is done by sending network heartbeat messages through the private network and writing a disk heartbeat to the voting file, as illustrated by the top graphic in the slide.

The voting file resides on a clustered file system that is accessible to all nodes in the cluster. Its primary purpose is to help in situations where the private network communication fails. The voting file is then used to communicate the node state information to the other nodes in the cluster. Without the voting file, it can be difficult for an isolated node to determine whether it is experiencing a network failure or whether the other nodes are no longer available. It would then be possible for the cluster to enter a state where multiple sub-clusters of nodes would have unsynchronized access to the same database files.

The bottom graphic illustrates what happens when node 3 can no longer send network heartbeats to the other members of the cluster. When the other nodes can no longer see node 3's heartbeats, they decide to evict that node by using the voting disk. When node 3 reads the removal message or "kill block," it generally reboots itself to ensure that all outstanding write I/Os are lost.
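Voting files are inspected and relocated with CRSCTL. A sketch of the documented commands; the disk group name +CRS is illustrative, and the replace command must be run as root on a cluster node:

```shell
# List the configured voting files and their state.
crsctl query css votedisk

# Move the voting files to a different Oracle ASM disk group (run as root).
crsctl replace votedisk +CRS
```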

A "split brain" condition generally arises when one of the nodes cannot talk to the other nodes over the network but is still able to access the voting disk, and it can happen for any number of reasons. The primary cause is that a cluster node is unable to respond to the heartbeat requests from another node. This can be caused by network failures or interruptions, hardware failures, software failures, or resource starvation (probably the most common cause). Whatever the cause, Oracle Clusterware has a very conservative design, to absolutely guarantee the integrity of the cluster and the data.

Related Resources

Server Network Configuration

[Diagram: two servers, each with a public IP, a VIP, private IPs, a local listener, and a SCAN listener; a network switch connects the public, private, and storage networks, with shared storage attached via the storage network.]

Notes

Each cluster requires certain network resources. During Oracle Grid Infrastructure installation and configuration, you designate interfaces for use as public, private, or Oracle ASM interfaces. A cluster must have public network access for clients and a private network for the cluster interconnect. Oracle recommends that you have at least two network interface cards. Oracle also supports the use of link aggregations and bonded, trunked, or teamed networks for improved bandwidth and high availability. For more information about the network interface requirements, see Network Interface Hardware Minimum Requirements.

The storage network is only needed for network attached storage.

Related Resources

Clusterware Processes and Services

[Diagram: on each server, Oracle Clusterware comprises Oracle High Availability Services and Cluster Ready Services (CRSD, RACGIMON, EVMD, OCSSD, OPROCD, OHASD, ...), managing resources such as Oracle ASM, databases, services, OCR, VIPs, ONS, EMD, listeners, GNS, HAVIP, CVU, CDP, Oracle ACFS, IOSERVER, MOUNTFS, OVMM, VMs, and Oracle FPP; Clusterware tools include cluvfy and crsctl; nodes are added via the Oracle Grid Infrastructure installer or Oracle Fleet Patching and Provisioning; global management tools include SRVCTL, DBCA, and EM; the servers communicate over the private network.]

Notes

Oracle Clusterware consists of two separate technology stacks: an upper technology stack anchored by the Cluster Ready Services (CRS) daemon (CRSD) and a lower technology stack anchored by the Oracle High Availability Services daemon (OHASD). The Clusterware components vary slightly based on the platform.

You can use the installer, gridSetup.sh or the Oracle Fleet Patching and Provisioning (Oracle FPP) utility rhpctl to add a new node to an existing cluster. For more information, see Adding a Cluster Node on Linux and UNIX Systems.
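Before adding a node, Oracle recommends validating it with the Cluster Verification Utility (cluvfy), which is named in the Clusterware tools above. A sketch of the documented pre-check stage; the node name node3 is illustrative:

```shell
# Verify that a prospective node meets the prerequisites for
# joining the cluster before running the installer or rhpctl.
cluvfy stage -pre nodeadd -n node3 -verbose
```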

Related Resources

Cluster Ready Services (CRS)

[Diagram: startup dependency levels 0 through 4; INIT starts OHASD, whose agents (OHASD oraagent, OHASD orarootagent, cssdagent, cssdmonitor) start the High Availability stack processes (mDNSD, GIPCD, GPNPD, EVMD, CSSD, CTSSD, Diskmon, the Oracle ASM instance, and CRSD); CRSD agents (CRSD oraagent, CRSD orarootagent) then start the resources managed by Cluster Ready Services, including network resources, GNS and GNSVIP, node VIPs, SCAN IPs, SCAN listeners, listeners, the ACFS registry, disk groups, database resources, services, ONS, eONS, and GSD.]

Notes

The Cluster Ready Services (CRS) technology stack leverages several processes to manage various services. It manages resource availability, high availability, and cluster operations within Oracle Grid Infrastructure. Cluster Ready Services is the primary program for managing high availability operations in a cluster.

The CRSD manages cluster resources based on the configuration information that is stored in OCR for each resource. This includes start, stop, monitor, and failover operations. The CRSD process generates events when the status of a resource changes. When you have Oracle RAC installed, the CRSD process monitors the Oracle AI Database instance, listener, and so on, and automatically restarts these components when a failure occurs.

Even though the CRS and High Availability Services stacks have processes with the same names, such as oraagent and orarootagent, the processes are distinctly different. To learn more about the Oracle Clusterware platform, refer to the Overview of Oracle Clusterware Platform-Specific Software Components.

Related Resources

Oracle Notification Services (ONS)

[Diagram: CRS and EMD publish HA events through ONS, and Advanced Queuing (AQ) delivers them from the database; events reach Java applications via JDBC or the ONS Java API, C applications via OCI or the ONS C API, .NET applications via ODP.NET, proxy applications, callout scripts and executables, and EMC.]

Notes

High Availability (HA) events are generated when resources change state within an Oracle Clusterware environment. Oracle Notification Service (ONS) is a facility that creates a bridge with middle-tier servers or applications to transport these events to application logic for handling or reaction. ONS is part of a larger framework known as Fast Application Notification (FAN). With FAN, applications use these events to achieve very fast detection of failures and re-balancing of connection pools, following failures and recovery. When FAN is used with an Oracle AI Database, the Advanced Queuing (AQ) feature allows HA events to be received by external applications such as .NET clients. The easiest way to receive all the benefits of FAN, with no effort, is to use a client that is integrated with FAN, such as:

Note: Not all of these applications can receive all types of FAN events or use Oracle Notification Service.

Related Resources

Oracle Automatic Storage Management (Oracle ASM) Configuration

[Diagram: Oracle Flex ASM; server 1 runs a database instance with a local Oracle ASM instance, server 2 is an Oracle ASM client of the Oracle ASM instance on server 3, and server 5 (with an ASM proxy instance and ACFS/ADVM drivers) reaches storage through the I/O Server instance on server 4; the nodes communicate over the Oracle ASM network, and Oracle ASM disk storage is attached via the storage network.]

Notes

All Oracle ASM installations can be configured to serve Oracle ASM Flex Clients.

On the left, Server 1 shows the traditional standalone configuration, with the Oracle ASM instance local to the database instance. In the past this was the only configuration available: each node had to have an Oracle ASM instance if Oracle ASM was used to provide shared storage.

Notice that Server 2 is an Oracle ASM client node. It uses the Oracle ASM service on Server 3, another node in the cluster, to access the metadata, and then performs direct access to the Oracle ASM disk storage for the data blocks.

To the right is Server 5. It has no direct access to the Oracle ASM disk storage, but gets all the data through the IO Server (IOS) on Server 4. The Oracle IO Server was introduced in Oracle Grid Infrastructure version 12.2. IOS enables you to configure client clusters on such nodes. On the storage cluster, clients send their IO requests to network ports opened by an IOServer. The IOServer instance receives data packets from the client and performs the appropriate IO to Oracle ASM disks, similar to any other database client. On the client side, databases can use dNFS to communicate with an IOServer instance. However, there is no client-side configuration, so you are not required to provide a server IP address or any additional configuration information. On servers and clusters that are configured to access Oracle ASM files through IOServer, the discovery of the Oracle IOS instance occurs automatically.

Each Oracle ASM instance in a cluster has access to the Oracle ASM disk storage in that cluster. Oracle ASM disks are shared disks attached to the nodes of the cluster, in possibly varying ways, as shown in the graphic. Oracle ASM manages disk groups rather than individual disks. The Oracle ASM utilities allow you to add disks, partitions, logical volumes, or network attached files (NFS) to a disk group.
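Adding storage to a disk group uses the documented ALTER DISKGROUP statement, run in the Oracle ASM instance. A sketch; the disk group name, disk path, and rebalance power are illustrative:

```sql
-- Add a disk to an existing disk group; Oracle ASM rebalances
-- data across the disks at the requested power level.
ALTER DISKGROUP data ADD DISK '/dev/oracleasm/disks/DISK5'
  REBALANCE POWER 4;
```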

Related Resources

Oracle ASM Instance

[Diagram: an Oracle ASM instance consists of the System Global Area (SGA), containing the shared pool, large pool, Oracle ASM cache, and free memory, plus processes including RBAL, ARBn, GMON, Onnn, PZ9n, MARK, and other miscellaneous processes.]

Notes

Every time Oracle ASM or a database starts, a shared memory area called the System Global Area (SGA) is allocated and the Oracle ASM background processes are started. However, because Oracle ASM performs fewer tasks than a database, an Oracle ASM SGA is much smaller than the database SGA. The combination of background processes and the SGA is called an Oracle ASM instance. The instance represents the CPU and RAM components of a running Oracle ASM environment.

The SGA in an Oracle ASM instance differs in memory allocation and usage from the SGA in a database instance. The SGA in the Oracle ASM instance is divided into four primary areas, as shown in the graphic: the shared pool, the large pool, the Oracle ASM cache, and free memory.

The minimum recommended amount of memory for an Oracle ASM instance is 256 MB. Automatic memory management is enabled by default on an Oracle ASM instance and will dynamically tune the sizes of the individual SGA memory components. The amount of memory that is needed for an Oracle ASM instance will depend on the amount of disk space that is being managed by Oracle ASM.
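As a hedged sketch, the memory settings described above can be inspected and adjusted from the Oracle ASM instance; the parameter value shown is illustrative only:

```sql
-- Run while connected to the Oracle ASM instance (for example, AS SYSASM).
-- Show the automatic memory management target and related settings:
SELECT name, value
FROM   v$parameter
WHERE  name IN ('memory_target', 'sga_target', 'large_pool_size');

-- Illustrative only: raise the target when Oracle ASM manages more storage.
ALTER SYSTEM SET memory_target = 1100M SCOPE = spfile SID = '*';
```

Because automatic memory management is enabled by default, the individual SGA components are tuned dynamically within this target.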

The second part of the Oracle ASM instance is the set of background processes. An Oracle ASM instance can have many background processes; not all are always present. Because the Oracle ASM instance shares the same code base as an Oracle AI Database instance, all the required background processes of a database instance exist in the Oracle ASM instance. There are required background processes and optional background processes. Some of these processes, shown in the graphic, include RBAL, which coordinates rebalance activity for disk groups; ARBn, which performs the rebalance extent movements; GMON, which maintains disk membership in disk groups; MARK, which marks allocation units as stale following a missed write; and slave processes such as Onnn and PZ9n.

The preceding list of processes is not complete. There can be hundreds of database instance background processes running depending on the database options and configuration of the instance. For the Oracle ASM instance, these processes will not always perform the same tasks as they would in a database instance. For example, the LGWR process in a database instance copies change vectors from the log buffer section of the SGA to the online redo logs on disk. The Oracle ASM instance does not contain a log buffer in its SGA, nor does it use online redo logs. The LGWR process in an Oracle ASM instance copies logging information to an Oracle ASM disk group.

If Oracle ASM is clustered, additional processes related to cluster management will be running in the Oracle ASM instance. Some of these processes include the global enqueue and lock management processes, such as LMON, LMDn, and LCKn.

Additional processes are started when ADVM volumes are configured.

The Oracle ASM instance uses dedicated background processes for much of its functionality.

Related Resources

Oracle ASM Disk Storage

General file services Oracle ASM disk storage Database file services Disk group Disk group File group File group File group Files Parameter file (SPFILE) Password file Cluster registry Voting files Oracle ASM Oracle ACFS Oracle ADVM volume Files

Notes

Oracle ASM and Oracle ACFS both use Oracle ASM disk storage. Oracle ASM presents disk groups to the database as mounted storage. Oracle ACFS presents Oracle ADVM volumes to the operating system as mount points.

Oracle ASM disk storage has two organizations: physical and logical. The physical organization consists of Oracle ASM disks: full disks or disk partitions assigned to Oracle ASM. Each disk group has two or more failure groups, except when redundancy is set to EXTERNAL.

The logical unit of organization of Oracle ASM disk storage is the disk group. Each disk group has two or more Oracle ASM disks assigned to it, and each Oracle ASM disk can be assigned to only one disk group. Disk groups have properties that determine their behavior, such as redundancy and failure groups.

You may set the redundancy property of the disk group to EXTERNAL (no mirroring), NORMAL (two-way mirroring), HIGH (three-way mirroring), EXTENDED, or FLEX. See Creating a Disk Group for more information about redundancy properties. When redundancy is set to FLEX, mirroring is set for the file group rather than the disk group.

Related Resources

Oracle ASM Disk Groups

Logical Physical Disk group Disk group Failure group Failure group Failure group Failure group Disk Disk Disk Disk Disk Disk Disk Disk Disk Disk Disk Disk File group File group

Notes

In Oracle ASM, the disk group is used to manage several storage properties, such as redundancy, access control, and allowed repair time before drop. Physical disks are assigned to disk groups. Disk groups provide the functionality of RAID with much more flexibility. Disk groups have unique names in the Oracle ASM instance. All disks in a disk group must be of equal size. A disk can belong to one and only one disk group.

You specify the redundancy property for each disk group. The possible settings are EXTERNAL, NORMAL, HIGH, FLEX, and EXTENDED. When you specify EXTERNAL redundancy, Oracle ASM does not provide mirroring. If you specify NORMAL redundancy, Oracle ASM provides two-way mirroring by default for most files. When HIGH redundancy is specified, Oracle ASM provides three-way mirroring. When you specify FLEX or EXTENDED redundancy, the disk group redundancy is overridden by the file group redundancy property. EXTENDED redundancy provides two-way mirroring by default within a site, and assumes that you have a vendor-supplied disk mirroring solution between sites.

By default, each disk is in a different failure group; optionally, disks may be grouped into failure groups. Normal redundancy requires a minimum of two disks or two failure groups. High redundancy requires a minimum of three disks or failure groups. A disk group has a default redundancy that can be overridden at the file or file group level, if there are sufficient failure groups. The example shows only two failure groups, so high redundancy is not possible. Depending on the redundancy setting, the blocks of a file placed in the disk group or file group are scattered across the required number of physical disks.
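A disk group with explicit failure groups, as described above, can be sketched in SQL. The disk group name, disk paths, failure group names, and attribute value are hypothetical examples:

```sql
-- Hedged sketch: all names, paths, and the compatibility value are
-- hypothetical. NORMAL redundancy with two failure groups means each
-- extent is mirrored across the two failure groups (for example, two
-- storage controllers) rather than across arbitrary disks.
CREATE DISKGROUP data NORMAL REDUNDANCY
  FAILGROUP controller1 DISK '/dev/disk1', '/dev/disk2'
  FAILGROUP controller2 DISK '/dev/disk3', '/dev/disk4'
  ATTRIBUTE 'compatible.asm' = '19.0';
```

With this layout, the loss of an entire failure group still leaves one complete copy of every extent available.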

File groups are optional, but they provide more flexible storage management when multiple databases, PDBs, or clusters use the same Oracle ASM storage set. To use Oracle ASM file groups, set the disk group redundancy to FLEX or EXTENDED. File groups allow you to control redundancy, and database access, for specified files within a disk group.
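A hedged sketch of the mechanics described above: create a FLEX redundancy disk group, add a file group for a database, and set the file group's own redundancy. All names and paths are hypothetical examples:

```sql
-- Hedged sketch: names and paths are hypothetical. File groups require
-- a FLEX (or EXTENDED) redundancy disk group.
CREATE DISKGROUP data FLEX REDUNDANCY
  DISK '/dev/disk1', '/dev/disk2', '/dev/disk3';

-- Create a file group owned by a database and override the redundancy
-- at the file group level:
ALTER DISKGROUP data ADD FILEGROUP fg_db1 DATABASE db1;
ALTER DISKGROUP data MODIFY FILEGROUP fg_db1 SET 'redundancy' = 'high';
```

Because the disk group is FLEX, the file group's redundancy property, not the disk group's, determines how the files in fg_db1 are mirrored.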

Related Resources

Oracle ASM File Groups

Disk group DATA Disk group FRA Quota group QG1 Quota group QG2 Quota group QG1 Quota group QG2 File group PDB1 File group PDB2 File group PDB3 File group PDB1 File group PDB2 File group PDB3 File 1 File 3 File 2 File 1 File 3 File 2 File 1 File 3 File 2 File 4 File 6 File 5 File 4 File 6 File 5 File 4 File 6 File 5 PDB1 PDB2 PDB3

Notes

The example illustrates file groups in a multitenant environment. Each PDB has a file group; the file group belongs to a quota group and resides in a disk group. Note that file group names and quota group names are unique within a disk group; the same name may be used in different disk groups. You can use this for easy identification. A PDB may have only one file group in a disk group, and a quota group may contain multiple file groups. File groups and quota groups may be administered with commands in SQL*Plus or ASMCMD.
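The quota group relationship shown in the graphic can be sketched in SQL; the disk group, quota group, file group names, and quota size are hypothetical examples:

```sql
-- Hedged sketch: all names and the quota value are hypothetical.
-- Create a quota group with a storage limit, then place a file group
-- into it so the PDB's files are counted against that quota.
ALTER DISKGROUP data ADD QUOTAGROUP qg1 SET 'quota' = '10T';
ALTER DISKGROUP data MODIFY FILEGROUP fg_pdb1 SET 'quota_group' = 'qg1';
```

Assigning several file groups to the same quota group lets multiple PDBs share a single storage budget within the disk group.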

Related Resources

Oracle Advanced Cluster File System (Oracle ACFS)

Oracle RAC cluster Server 1 Server 2 Oracle ACFS file system Oracle ADVM Oracle ASM proxy Oracle ASM instance Shared storage Metadata I/O

Notes

Oracle Advanced Cluster File System (Oracle ACFS) is a POSIX-compliant storage management technology that extends Oracle Automatic Storage Management (Oracle ASM) functionality to support all types of files. It supports Oracle AI Database files and application files, including install files, database data files, database trace files, database alert logs, application reports, BFILEs, and configuration files. Other supported files are video, audio, text, images, engineering drawings, and all other general-purpose application file data. Oracle ACFS presents a mountable file system to any cluster node that can access Oracle ASM.

Oracle Advanced Cluster File System (Oracle ACFS) communicates with Oracle ASM through the Oracle ADVM interface. With the addition of Oracle ADVM, Oracle ASM becomes a complete storage solution for user data, serving both database and non-database file needs. I/O requests to Oracle ACFS are passed through Oracle ADVM and the associated I/O drivers. The metadata required to access the Oracle ASM disk groups is provided through the Oracle ASM proxy, but the Oracle ADVM drivers access the Oracle ASM blocks directly.

Note: Dynamic volumes supersede traditional device partitioning. Each volume is individually named and may be configured for a single file system. Oracle ADVM volumes may be created on demand from Oracle ASM disk group storage and dynamically resized as required. These attributes make Oracle ADVM volumes far more flexible than physical devices and associated partitioning schemes.
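The on-demand volume creation described in the note can be sketched with ASMCMD and operating system commands. The disk group, volume name, device suffix, and mount point are hypothetical examples, and the exact device path is assigned by Oracle ADVM at creation time:

```shell
# Hedged sketch: names, sizes, and the device suffix are hypothetical.
# 1. Create a dynamic volume in an Oracle ASM disk group:
asmcmd volcreate -G data -s 10G volume1

# 2. Display the volume's device name as exposed by Oracle ADVM:
asmcmd volinfo -G data volume1

# 3. Create an Oracle ACFS file system on the volume and mount it (Linux):
mkfs -t acfs /dev/asm/volume1-123
mount -t acfs /dev/asm/volume1-123 /acfs/mount1
```

Because the volume is backed by a disk group, it can later be resized online with asmcmd volresize rather than by repartitioning a physical device.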

Related Resources

Oracle RAC TDE Keystore

Storage 1 Create keystore 4 (Rolling) Restart 2 ALTER SYSTEM SET TDE_CONFIGURATION =”KEYSTORE_CONFIGURATION=FILE” SCOPE=BOTH SID=’*’; 3 Oracle wallet Oracle AI Database Create isolated PDB Oracle ASM ALTER SYSTEM SET WALLET_ROOT =’+DATA/db_unique_name’ SCOPE=spfile sid=’*’; ALTER SYSTEM SET WALLET_ROOT =’/acfs_root/KEYSTORES/db_unique_name’ scope=spfile sid=’*’; ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY “password”; ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY “PDB-keystore-password”; ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY “CDB-keystore-password” WITH BACKUP container=current; ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY IDENTIFIED BY “PDB-keystore-password” WITH BACKUP; Oracle ACFS

Notes

Oracle Real Application Clusters (Oracle RAC) supports a shared wallet for Transparent Data Encryption (TDE). You can store the shared wallet on shared storage, either Oracle ASM or Oracle ACFS. A deployment with a single wallet on a shared disk requires no additional configuration to use Transparent Data Encryption. The wallet must reside in the directory specified by the WALLET_ROOT initialization parameter, which supersedes the settings in sqlnet.ora.

This diagram describes the steps to configure a shared wallet for an Oracle RAC database:

  1. Change the WALLET_ROOT initialization parameter
    Set the WALLET_ROOT initialization parameter value to the directory where the wallet is stored on the shared storage. The recommended default method is to store the shared wallet on an Oracle ASM disk group and set the WALLET_ROOT initialization parameter to +DATA/db_unique_name.
  2. (Rolling) Restart all Oracle RAC databases
    Perform a normal or a rolling restart of the database instances to activate the new initialization parameters. Stop and restart each instance individually to avoid a complete outage of your database.
  3. Change the TDE_CONFIGURATION initialization parameter
    Change the initialization parameter TDE_CONFIGURATION to specify a software keystore. You can also set this parameter by adding TDE_CONFIGURATION="KEYSTORE_CONFIGURATION=FILE" to the init.ora file on all RAC nodes.
  4. Create a keystore and set an encryption key
    Create a keystore with a password and set the encryption key. You can choose either set of the commands described in the graphic, depending on your database deployment.

The local copies of the wallet need not be synchronized for the duration of Transparent Data Encryption usage until the server key is re-keyed through the ADMINISTER KEY MANAGEMENT statement.
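The four steps can be consolidated into one sketch, using the Oracle ASM variant of the commands shown in the diagram. The passwords are placeholders, and depending on the keystore state an ADMINISTER KEY MANAGEMENT SET KEYSTORE OPEN step may be required before setting the key:

```sql
-- Hedged sketch following the diagram; passwords are placeholders.
-- 1. Point WALLET_ROOT at the shared location (Oracle ASM variant):
ALTER SYSTEM SET WALLET_ROOT = '+DATA/db_unique_name'
  SCOPE = spfile SID = '*';

-- 2. Perform a (rolling) restart of all instances so WALLET_ROOT takes effect.

-- 3. Configure a file-based software keystore:
ALTER SYSTEM SET TDE_CONFIGURATION = 'KEYSTORE_CONFIGURATION=FILE'
  SCOPE = both SID = '*';

-- 4. Create the keystore and set the master encryption key:
ADMINISTER KEY MANAGEMENT CREATE KEYSTORE IDENTIFIED BY "password";
ADMINISTER KEY MANAGEMENT SET ENCRYPTION KEY
  IDENTIFIED BY "password" WITH BACKUP;
```

Because the wallet lives on shared storage, every Oracle RAC instance reads the same keystore and no per-node wallet copies are needed.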

Related Resources

Oracle Fleet Patching and Provisioning

Cluster Cluster Oracle FPP server node Server 1 Server 2 Server 3 Oracle Grid Infrastructure Metadata repository Oracle ACFS Oracle FPP workflow Oracle ASM instance Application IP Storage Storage rhpctl add

Notes

Oracle Fleet Patching and Provisioning (Oracle FPP) is a service in Oracle Grid Infrastructure. Oracle FPP is a software lifecycle management method for provisioning and maintaining Oracle homes. It enhances the process of applying patches and provisioning databases in Oracle Grid Infrastructure environments. Oracle FPP enables centralized management of patching and provisioning tasks across a fleet of Oracle AI Databases. It is designed to meet the needs of modern enterprises by automating repetitive tasks and ensuring the timely application of patches across database fleets.

Oracle FPP enables mass deployment and maintenance of standard operating environments for databases, clusters, and user-defined software types. With Oracle Fleet Patching and Provisioning, you can also install clusters and provision, patch, scale, and upgrade Oracle Grid Infrastructure and Oracle AI Database. Additionally, you can provision applications and middleware.

Related Resources

Oracle Fleet Patching and Provisioning Workflow

Cluster Oracle FPP server node Source Targets Oracle Grid Infrastructure GI home Oracle FPP client Unmanaged or Oracle FPP client Metadata repository Other home Oracle ACFS Oracle ASM instance Grid Naming Service (GNS) DB home Storage Add gold images Add a client Create working copies Patch a target 1 2 3 4

Notes

An Oracle Fleet Patching and Provisioning (Oracle FPP) server always has Oracle Grid Infrastructure software installed. It is highly recommended that you also configure Oracle ASM with Oracle Advanced Cluster File System (Oracle ACFS) to take advantage of the ability to snapshot and store deltas of gold images.

Unmanaged homes are software homes, for example, a database home, that were not provisioned by Oracle FPP. This graphic assumes that the Oracle FPP Server has already been provisioned. The Oracle FPP workflow in this case is as follows:

  1. Create a gold image from a software home that was previously installed with Oracle Universal Installer. Use the rhpctl import image command for the first gold image. You may run this command from the Oracle FPP Server or an Oracle FPP Client. You can create subsequent gold images from either a source home or a working copy using the rhpctl add image command.
  2. Add a client to the Oracle FPP Server configuration with rhpctl add client. The client is required to have the operating system installed and the cluster ready.
  3. Create a working copy of the image with rhpctl add workingcopy on the server, on the local file system of the Oracle FPP Client, or on an NFS mount point. This command creates a software home for the client on the storage specified in the command. Creating the home on the server requires that the client use Oracle ACFS with access to the same storage cluster as the server. This operation can be performed on an unmanaged or managed target.
  4. Patching a target follows a similar process.
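The workflow steps above can be sketched as a command sequence. The image, working copy, client, and path names are hypothetical examples, and the exact options vary with the target type:

```shell
# Hedged sketch: image, working copy, client, and path names are hypothetical.
# 1. Import a gold image from a previously installed Oracle home:
rhpctl import image -image db_gold_1 -path /u01/app/oracle/product/dbhome_1

# 2. Register a client cluster with the Oracle FPP Server:
rhpctl add client -client cluster2

# 3. Provision a working copy of the image for the client:
rhpctl add workingcopy -workingcopy db_wc_1 -image db_gold_1 -client cluster2

# 4. Patch by moving the database from its current working copy to a
#    working copy created from a patched gold image:
rhpctl move database -sourcewc db_wc_1 -patchedwc db_wc_2
```

The move operation in step 4 is what makes patching repeatable: the patched home is provisioned as a new working copy, and databases are relocated to it rather than patched in place.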

Adopting a structured workflow and leveraging automation tools are essential for successful implementation and maintenance of Oracle FPP processes.

Related Resources