Exadata Database Machine Technical Architecture


December 2024

Copyright © 2024 Oracle and/or its affiliates

Exadata Rack Overview

[Diagram: Exadata Database Machine X11M single rack, showing network switches, an optional spine switch to connect multiple racks, from 2 to 15 database servers, from 3 to 17 storage servers, and up to 19 total database and storage servers.]

Notes

Oracle Exadata Database Machine features scale-out industry-standard database servers, scale-out intelligent storage servers, and high-speed internal RDMA Network Fabric that connects the database and storage servers.

In a single rack, you can start with a base configuration containing two database servers and three storage servers and expand to an elastic configuration with up to 19 database and storage servers in total.
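For planning purposes, the single-rack limits above can be captured in a small validation helper. This is an illustrative sketch only; the function name and interface are not part of any Oracle tool.

```python
def is_valid_single_rack(db_servers: int, storage_servers: int) -> bool:
    """Check a proposed Exadata X11M single-rack elastic configuration
    against the limits stated above: 2 to 15 database servers, 3 to 17
    storage servers, and at most 19 servers in total."""
    return (2 <= db_servers <= 15
            and 3 <= storage_servers <= 17
            and db_servers + storage_servers <= 19)

# The base configuration (2 database + 3 storage servers) is valid:
print(is_valid_single_rack(2, 3))    # True
# 15 database servers plus 5 storage servers exceeds the 19-server cap:
print(is_valid_single_rack(15, 5))   # False
```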

Exadata Database Machine also includes network switches to connect the database servers to the storage servers, and you can add an optional spine switch to connect multiple racks.

Note: All specifications are for Exadata X11M racks. See the related resources for more details about Exadata X11M and information about prior models.

Related Resources

Exadata Networking

[Diagram: Exadata networking, showing a database server and a storage server connected through redundant RDMA Network Fabric switches, with the client network (BONDETH0), additional networks (BONDETH1 and so on), and ILOM and administration connections through the management switch, plus redundant PDUs.]

Notes

Oracle Exadata Database Machine includes equipment to connect the system to your network. The network connections allow clients to connect to the database servers and also enable remote system administration.

In a single-rack configuration, Oracle Exadata Database Machine includes the following networked components: the database servers, the storage servers, two RDMA Network Fabric switches, a management switch, and the power distribution units (PDUs).

Exadata Database Machine provides a client network (BONDETH0), optional additional networks (BONDETH1 and so on), and an administration network, which connects to each server and its Integrated Lights Out Manager (ILOM) interface.

Related Resources

Connecting Multiple Exadata Racks

[Diagram: RDMA Network Fabric connections for two interconnected racks, each containing a spine switch, upper and lower leaf switches, database servers 1 through n, and storage servers 1 through m; each spine switch has 7 links to each leaf switch.]

Notes

You can connect up to 14 Exadata X11M racks together before external RDMA Network Fabric switches are required.

The diagram shows the RDMA Network Fabric architecture for two interconnected X11M racks. Each rack illustration shows two database servers (1 and n) and two storage servers (1 and m) representing all the database and storage servers in the rack. 

Each rack has one spine switch and two leaf switches. The connections between the leaf switches are removed and replaced by connections between every spine switch and every leaf switch. As shown in the diagram, when connecting two racks, each spine switch has seven connections to each leaf switch. All database and storage servers connect to both leaf switches, the same as in a single rack.
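The two-rack cabling described above can be tallied with a short sketch. This is illustrative only: real multi-rack cabling tables vary the per-pair link count with the number of racks, and only the two-rack value (7 links) comes from the diagram.

```python
def interconnect_links(racks: int, links_per_pair: int) -> int:
    """Count spine-to-leaf links when every spine switch connects to
    every leaf switch. Each rack contributes 1 spine switch and
    2 leaf switches (upper and lower)."""
    spines = racks        # one spine switch per rack
    leaves = 2 * racks    # two leaf switches per rack
    return spines * leaves * links_per_pair

# Two racks, 7 links between each spine/leaf pair (as in the diagram):
print(interconnect_links(2, 7))  # 56 spine-to-leaf links in total
```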

Related Resources

Database Server Deployment Overview

[Diagram: database server deployment, showing database servers with an optional hypervisor hosting VM guests, role-separated users (root, oracle, grid), Oracle Database and Oracle Grid Infrastructure, bonded client and additional networks, and actively bonded RDMA Network Fabric ports connecting to storage servers running Exadata System Software.]

Notes

When deploying Oracle Exadata Database Machine, you can choose between the following database server deployment options: deployment directly on the physical server hardware, or deployment with an optional hypervisor hosting virtual machine (VM) guests.

Regardless of the chosen deployment option, each database server runs Oracle Database and Oracle Grid Infrastructure with role-separated users (root, oracle, and grid), uses bonded interfaces for the client and additional networks, and connects to the RDMA Network Fabric through actively bonded ports.

Related Resources

Storage Server Types

[Diagram: storage server types. Extreme Flash (EF) storage server: memory, performance-optimized flash, and capacity-optimized flash. High Capacity (HC) storage server: memory, performance-optimized flash, and hard disk drives (HDDs).]

Notes

When configuring Oracle Exadata Database Machine, you can choose High Capacity (HC) or Extreme Flash (EF) storage servers. HC storage servers contain high-performance flash memory and hard disk drives (HDDs). EF storage servers have an all-flash configuration.

Exadata X11M HC and EF storage server models are also equipped with additional memory (DDR5 DRAM) for Exadata RDMA Memory Cache (XRMEM cache), which supports high-performance data access using Remote Direct Memory Access (RDMA).

Note: All specifications are for Exadata X11M storage servers. See the related resources for more details about Exadata X11M and information about prior models.

Related Resources

High Capacity (HC) Storage Servers

[Diagram: database servers connected through the RDMA Network Fabric to HC storage servers with XRMEM. Each storage server runs Exadata System Software and contains an XRMEM cache in memory, performance-optimized flash for the flash cache and flash log, and hard disk drives (HDDs) serving an Exascale storage pool and/or ASM disk groups.]

Notes

The Exadata X11M system family contains the following High Capacity (HC) storage server offerings:

Note: All specifications are for Exadata X11M storage servers. See the related resources for more details and information about other models.

Each storage server runs Oracle Exadata System Software to process data at the storage level and pass on only what is needed to the database servers.

On HC storage servers, the flash devices primarily support Exadata Smart Flash Cache, which automatically caches frequently used data in high-performance flash memory. Also, Exadata Smart Flash Log uses a small portion of flash memory as temporary storage to reduce latency and increase throughput for redo log writes.
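The caching behavior described above, keeping frequently used data in fast media and evicting the coldest entries when space runs out, can be illustrated with a minimal least-recently-used (LRU) cache sketch. This is a conceptual model only, not how Exadata Smart Flash Cache is actually implemented.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache model: reads from slow storage are cached in
    fast media; when capacity is reached, the coldest entry is evicted."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.entries = OrderedDict()   # block_id -> data, coldest first

    def read(self, block_id, slow_storage):
        if block_id in self.entries:            # cache hit: mark as hot
            self.entries.move_to_end(block_id)
            return self.entries[block_id]
        data = slow_storage[block_id]           # cache miss: read slow media
        self.entries[block_id] = data
        if len(self.entries) > self.capacity:   # evict coldest block
            self.entries.popitem(last=False)
        return data

hdd = {1: "a", 2: "b", 3: "c"}                  # toy "hard disk"
cache = LRUCache(capacity=2)
cache.read(1, hdd); cache.read(2, hdd); cache.read(3, hdd)
print(list(cache.entries))  # [2, 3] -- block 1 was evicted
```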

Starting with Exadata System Software release 24.1, Oracle Exadata Exascale transforms Exadata storage management by decoupling Oracle Database and GI clusters from the underlying Exadata storage servers. Exascale software services manage pools of storage that span the fleet of Exadata storage servers and service multiple users and database server clusters.

Related Resources

Extreme Flash (EF) Storage Servers

[Diagram: database servers connected through the RDMA Network Fabric to EF storage servers with XRMEM. Each storage server runs Exadata System Software and contains an XRMEM cache in memory, performance-optimized flash for the flash cache and flash log, and capacity-optimized flash serving an Exascale storage pool and/or ASM disk groups.]

Notes

Exadata Storage Server X11M Extreme Flash (EF) is the premium all-flash extreme-performance Exadata storage server offering. Each Exadata X11M EF storage server includes the following hardware components:

Note: All specifications are for Exadata X11M storage servers. See the related resources for more details and information about other models.

Each storage server runs Oracle Exadata System Software to process data at the storage level and pass on only what is needed to the database servers.

As on HC storage servers, the 6.8 TB performance-optimized flash devices on EF storage servers primarily support Exadata Smart Flash Cache, which automatically caches frequently used data in high-performance flash memory. Likewise, Exadata Smart Flash Log uses a small portion of flash memory as temporary storage to reduce latency and increase throughput for redo log writes. However, unlike HC storage servers, EF storage servers use 30.72 TB capacity-optimized flash devices for data storage, providing much lower latency than hard disk drives (HDDs).

Starting with Exadata System Software release 24.1, Oracle Exadata Exascale transforms Exadata storage management by decoupling Oracle Database and GI clusters from the underlying Exadata storage servers. Exascale software services manage pools of storage that span the fleet of Exadata storage servers and service multiple users and database server clusters.

Related Resources

Exadata System Software Overview

[Diagram: Exadata System Software components. Each database server runs an Oracle Database instance with DBRM, Oracle Grid Infrastructure with an ASM instance, and Exadata System Software (DBMCLI, MS, and the Exascale client services). Each storage server runs Exadata System Software (CellCLI, MS, RS, CELLSRV, IORM, and the Exascale storage services) over an XRMEM cache, flash cache, flash log, and data storage (HDD or capacity-optimized flash). Administrators connect over SSH using DBMCLI or CellCLI plus dcli, ExaCLI, exadcli, and ESCLI.]

Notes

Oracle Exadata System Software provides database-aware storage services, such as the ability to offload SQL and other database processing from the database server. The database and storage servers both contain components of the Exadata System Software.

Starting with Exadata System Software release 24.1, Oracle Exadata Exascale transforms Exadata storage management by decoupling Oracle Database and GI clusters from the underlying Exadata storage servers. Exascale software services manage pools of storage that span the fleet of Exadata storage servers and service multiple users and database server clusters. With the introduction of Exascale, you can choose from the following storage configuration options: Exascale storage pools, traditional ASM disk groups, or a combination of both.

Each database server includes the following software components: an Oracle Database instance with Database Resource Manager (DBRM), Oracle Grid Infrastructure with an ASM instance, and Exadata System Software components including DBMCLI, Management Server (MS), and the Exascale client services.

Each storage server contains data storage hardware and Exadata System Software to manage the data. The software includes the following components: Cell Server (CELLSRV) with I/O Resource Management (IORM), Management Server (MS), Restart Server (RS), CellCLI, and the Exascale storage services.

Administrators manage the database and storage servers using secure network connections over the administration network. In addition to CellCLI and DBMCLI, administrators can use the following command-line interfaces: dcli, ExaCLI, exadcli, and ESCLI.

Note: This slide lists the major Exadata System Software components. See the related resources for more information.

Related Resources

Exadata Database Server Software with Exascale

[Diagram: database server software with Exascale. Oracle Grid Infrastructure; an Oracle Database instance with DBRM, the Exascale mapping table, libcell, and the EGSB and EDSB background processes; and Exadata System Software with RS, MS, DBMCLI, and the Exascale components ESCLI, EDV, BSW, EGS, and ESNP.]

Notes

With the introduction of Exascale, some new software is located on the Exadata database servers.

From an end-user perspective, Oracle Database functionality remains essentially the same. However, the database kernel is modified internally to provide seamless support for Exascale. Instead of using a separate ASM instance, databases on Exascale contain a mapping table in the SGA. This table is a relatively small directory that enables the database to locate the appropriate storage server for any given data. The database instance also contains two new background processes: EGSB, which maintains instance-level metadata about the Exascale cluster (managed by Exascale global services, or EGS), and EDSB, which maintains metadata about Exascale vaults (also known as Exascale data stores, or EDS). It is important to note that with Exascale, database clients direct I/O straight to the appropriate Exadata storage server; I/O does not pass through EGSB or EDSB.
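The mapping table's role, a small in-memory directory that lets the database instance send each I/O directly to the right storage server, can be pictured with a toy sketch. This is purely conceptual: the class, its keys, and the server names are invented here for illustration, and the real Exascale mapping is far more sophisticated.

```python
class MappingTable:
    """Toy model of the SGA directory that maps file extents to storage
    servers, letting the client issue I/O directly to the right server."""
    def __init__(self, extent_map):
        # extent_map: (file_name, extent_number) -> storage server name
        self.extent_map = extent_map

    def route_io(self, file_name: str, extent_no: int) -> str:
        # The database sends the request straight to this server;
        # it does not pass through the EGSB or EDSB background processes.
        return self.extent_map[(file_name, extent_no)]

table = MappingTable({("sales.dbf", 0): "cell01",
                      ("sales.dbf", 1): "cell02"})
print(table.route_io("sales.dbf", 1))  # cell02
```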

On each Exadata database server, the Exadata System Software also contains new software components, including ESCLI, EDV, BSW, EGS, and ESNP.

Oracle Grid Infrastructure continues to provide cluster services for Exadata databases. However, databases that use Exascale storage do not require an ASM instance.

Related Resources

Exadata Storage Server Software with Exascale

[Diagram: storage server software with Exascale. Exadata System Software with RS, MS, CELLSRV, CellCLI, and IORM, plus the Exascale components IFD, BSW, BSM, EDS (SYSEDS and USREDS), ERS, EGS, and ESCLI, over an XRMEM cache in memory, performance-optimized flash (flash cache and flash log), and data storage (HDD or capacity-optimized flash).]

Notes

On each Exadata storage server, the Exadata System Software contains new Exascale software components, including IFD, BSW, BSM, the EDS services (SYSEDS and USREDS), ERS, EGS, and ESCLI.

Apart from EGS, which always runs in a cluster of five service instances, and IFD, which runs on every Exascale storage server, the other Exascale services (ERS, SYSEDS, USREDS, BSM, and BSW) all run multiple instances spread across the available storage servers to provide high availability and share the workload. The exact placement of these service instances depends on the number of storage servers in the Exascale cluster: on a system with only a few storage servers, each server hosts many different service instances, whereas on a system with many storage servers, each might host only a few (possibly none).
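One way to picture the placement behavior above is a small round-robin sketch that spreads service instances across the available storage servers. This is illustrative only: the actual Exascale placement logic is internal to the software, and the instance counts for services other than EGS are made up here.

```python
def place_instances(service_counts, servers):
    """Assign service instances to storage servers round-robin.
    service_counts: {service_name: number_of_instances}."""
    placement = {s: [] for s in servers}
    slot = 0
    for service, count in service_counts.items():
        for _ in range(count):
            placement[servers[slot % len(servers)]].append(service)
            slot += 1
    return placement

servers = ["cell01", "cell02", "cell03"]
# EGS always runs five instances; the other counts are hypothetical.
layout = place_instances({"EGS": 5, "ERS": 2, "SYSEDS": 3}, servers)
# With few servers, each hosts several instances:
print(layout["cell01"])
```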

Exascale works in conjunction with, and relies on, the core Exadata cell services. Specifically, Exascale requires running instances of Cell Server (CELLSRV), Management Server (MS), and Restart Server (RS) on every storage server.

Related Resources

Device-Level Storage Objects

[Diagram: device-level storage objects. A physical disk (HDD or flash) presents a LUN, which contains a cell disk. The cell disk's space can be carved into grid disks (available to ASM) or pool disks (available to Exascale).]

Notes

Every Exadata storage server includes multiple physical disks, which can be hard disk drives (HDDs) or flash devices.

Each physical disk has a logical representation in the operating system (OS) known as a Logical Unit Number (LUN). Typically, there is a one-to-one relationship between physical disks and LUNs on all Exadata storage server models. However, on Exadata Extreme Flash (EF) storage servers, each of the four 30.72 TB capacity-optimized flash devices is configured with two LUNs, resulting in eight LUNs from those devices on each storage server.

A cell disk reserves the space on a LUN for use by Exadata System Software. A LUN may contain only one cell disk.

With the introduction of Exascale, storage is physically organized in Exascale storage pools, each containing numerous pool disks. While one Exadata cell disk can accommodate multiple Exascale pool disks (as indicated in the diagram), this is usually unnecessary because Exascale provides additional facilities that securely share storage pool resources amongst numerous tenants.

On systems that also use Oracle ASM, you can continue to define multiple grid disks on the available space in each cell disk.
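The device-level hierarchy described above can be modeled with a couple of small data structures. This is an illustrative sketch only (the class and field names are invented), covering the EF case where each capacity-optimized flash device presents two LUNs.

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalDisk:
    """An HDD or flash device. Most present exactly one LUN, but a
    30.72 TB capacity-optimized EF flash device presents two."""
    name: str
    luns_presented: int = 1

@dataclass
class CellDisk:
    """Reserves the space on one LUN for Exadata System Software. Its
    space can be carved into grid disks (ASM) or pool disks (Exascale)."""
    lun: str
    grid_disks: list = field(default_factory=list)
    pool_disks: list = field(default_factory=list)

# An EF server's four capacity-optimized flash devices, two LUNs each:
disks = [PhysicalDisk(f"FLASH_{i}", luns_presented=2) for i in range(4)]
luns = [f"{d.name}_LUN{j}" for d in disks for j in range(d.luns_presented)]
print(len(luns))  # 8 LUNs from the capacity-optimized devices
```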

Related Resources

Exadata Storage Structures

[Diagram: Exadata storage structures. Storage servers provide pool disks forming an Exascale storage pool (STORAGEPOOL1) and grid disks forming ASM disk groups (+DATA and +RECO). Vault @VAULT1 holds the FIN and HR databases; vault @VAULT2 holds the SALES database's data files and recovery files.]

Notes

With the introduction of Exascale, storage is physically organized in Exascale storage pools, each containing numerous pool disks.

An Exascale vault is a logical storage container that uses the physical resources provided by Exascale storage pools. By default, a vault can use all the associated storage pool resources. However, an Exascale administrator can limit the amount of space, I/O resources (I/Os per second, or IOPS), and cache resources associated with each vault.
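The idea of per-vault caps on space, I/O, and cache resources can be pictured with a small sketch. The class and attribute names here are hypothetical; they are not Exascale administration parameters.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Vault:
    """Toy model of an Exascale vault: a logical container drawing on a
    shared storage pool, optionally capped on space and IOPS."""
    name: str
    space_limit_tb: Optional[float] = None   # None means no cap
    iops_limit: Optional[int] = None

    def admits_write(self, used_tb: float, request_tb: float) -> bool:
        """Would a write of request_tb fit under the vault's space cap?"""
        if self.space_limit_tb is None:
            return True
        return used_tb + request_tb <= self.space_limit_tb

v = Vault("@VAULT1", space_limit_tb=10.0, iops_limit=50_000)
print(v.admits_write(used_tb=9.5, request_tb=0.4))  # True
print(v.admits_write(used_tb=9.5, request_tb=0.6))  # False
```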

To an end-user and Oracle Database, a vault appears like a top-level directory that contains files. Referencing files on Exascale is essentially the same as using an ASM disk group, except that Exascale uses the convention of beginning vault names with the at sign (@) character (for example, @VAULT1) instead of the plus (+) character (for example, +DATA).

However, Exascale vaults are much more sophisticated than ASM disk groups.

Exascale vaults facilitate strict data separation, ensuring that data is isolated to specific users and separated from other data and users. A vault and its contents are inaccessible to users without the appropriate privileges. For example, without the correct entitlements, users of one vault cannot see another vault, even though data from both vaults is striped across the same underlying storage pool (as illustrated in the diagram).

Furthermore, Exascale inherently distinguishes between various file types and automatically places data files and associated recovery files on separate pool disks. This enables users and databases to maintain all files within one vault instead of different disk groups for data and recovery files.

On systems that also use Oracle ASM, you can continue to define and use ASM disk groups, which reside alongside the Exascale storage pools.

Related Resources