1.2.1 Exascale Services

Exascale system components are implemented using a series of clustered software services. The core Exascale storage services predominantly run on the Exadata storage servers.

Exascale contains the following storage services:

  • Cluster Services

    Exascale cluster services, also known as Exascale global services (EGS), provide the core foundation for the Exascale system. EGS primarily manages the storage allocated to Exascale storage pools. It also manages storage cluster membership, provides security and identity services for storage servers and Exascale clients, and monitors the other Exascale services. Exascale cluster services use the Raft consensus algorithm.

    For high availability, every Exascale cluster contains five EGS instances.

    For Exascale clusters with five or more Exadata storage servers, one EGS instance runs on each of the first five storage servers.

    For Exascale configurations with fewer than five storage servers, one EGS instance runs on each Exadata storage server, and the remaining EGS instances run on the Exadata compute nodes to make up the required total of five. In a bare-metal configuration, EGS compute node instances run on the server operating system. In a configuration with compute nodes running on virtual machines (VMs), EGS compute node instances run in the hypervisor.
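    The placement rules above can be sketched as follows. This is an illustrative model of the documented behavior, not Exascale's actual placement code; it also shows why five instances are used: a five-member Raft group retains a majority (three) even if two instances fail.

    ```python
    # Illustrative sketch of the documented EGS placement rules.
    # This models the behavior described above; it is not Exascale code.

    EGS_INSTANCES = 5  # every Exascale cluster runs exactly five EGS instances

    def place_egs(storage_servers, compute_nodes):
        """Return the hosts for the five EGS instances.

        One instance runs on each of the first five storage servers; if the
        cluster has fewer than five storage servers, the remaining instances
        run on compute nodes to reach the required total of five.
        """
        hosts = storage_servers[:EGS_INSTANCES]
        shortfall = EGS_INSTANCES - len(hosts)
        hosts += compute_nodes[:shortfall]
        if len(hosts) < EGS_INSTANCES:
            raise ValueError("not enough servers for five EGS instances")
        return hosts

    def raft_majority(n):
        """Minimum number of live members a Raft group of size n requires."""
        return n // 2 + 1

    # With three storage servers, two EGS instances spill over to compute nodes.
    placement = place_egs(["cell1", "cell2", "cell3"], ["node1", "node2"])
    assert placement == ["cell1", "cell2", "cell3", "node1", "node2"]

    # Five instances keep a Raft majority (3) even after two failures.
    assert raft_majority(EGS_INSTANCES) == 3
    ```
    
    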

  • Control Services

    Exascale control services, also known as Exascale RESTful Services (ERS), provide the endpoint for Exascale management operations. All Exascale management operations are routed through ERS; however, no file I/O operations pass through ERS.

    ERS service instances are deployed using front-end and back-end server processes. The front-end ERS processes provide a highly available client endpoint with load-balancing capabilities. The back-end ERS processes work with other software services to process requests and reply to the client.

    Multiple ERS instances provide high availability and share the Exascale management workload. Typically, five ERS instances are distributed across the Exadata storage servers. However, for configurations with fewer than five storage servers, one ERS instance usually runs on each Exadata storage server.

    The Exascale Command Line (ESCLI) utility provides a simple command-line interface to perform Exascale monitoring and management functions. ESCLI commands are translated into ERS calls and run through ERS. ESCLI works in conjunction with the Exadata cell command-line interface (CELLCLI) and does not replace it.
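    The relationship between ESCLI and ERS can be pictured as a thin translation layer that maps each command onto a RESTful call. The command names and endpoint paths below are hypothetical placeholders chosen for illustration; the real ESCLI commands and ERS endpoints are defined by the Exascale software.

    ```python
    # Toy sketch of a CLI-to-REST translation layer, illustrating how a
    # front end such as ESCLI can turn commands into RESTful calls.
    # Command names and paths are hypothetical, not the actual interface.

    COMMAND_MAP = {
        "lsvault": ("GET", "/vaults"),            # hypothetical mapping
        "mkvault": ("POST", "/vaults"),
        "rmvault": ("DELETE", "/vaults/{name}"),
    }

    def translate(command, name=None):
        """Translate a CLI command into an (HTTP method, path) pair."""
        method, path = COMMAND_MAP[command]
        if "{name}" in path:
            if name is None:
                raise ValueError(f"{command} requires a vault name")
            path = path.format(name=name)
        return method, path

    assert translate("lsvault") == ("GET", "/vaults")
    assert translate("rmvault", "myvault") == ("DELETE", "/vaults/myvault")
    ```
    
    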

  • Exascale Vault Manager Services

    Exascale vault manager services, collectively known as Exascale data services (EDS), are the Exascale software services that manage file and vault metadata:

    • System Vault Manager

      The system vault manager service (SYSEDS) serves and manages the metadata for Exascale vaults. This metadata includes vault-level access control lists (ACLs) and attributes.

      SYSEDS is a lightweight process, so one instance can usually service the load for an entire Exascale cluster. However, to ensure high availability, five SYSEDS instances are typically distributed across the Exadata storage servers. For configurations with fewer than five storage servers, one SYSEDS instance usually runs on each Exadata storage server.

    • User Vault Manager

      The user vault manager service (USREDS) serves and manages the metadata for files inside the Exascale vaults. This metadata includes file-level access control lists (ACLs) and attributes, along with metadata that defines clones and snapshots. All file control operations, such as open and close, are serviced by the user vault manager service.

      Multiple USREDS instances provide high availability and share the user workload. Typically, five USREDS instances are distributed across the Exadata storage servers. However, for configurations with fewer than five storage servers, one USREDS instance usually runs on each Exadata storage server.
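    As a rough mental model of the division of labor described above, a metadata operation can be routed by its scope: vault-level metadata to SYSEDS, file-level metadata (including file control operations) to USREDS. This sketch only restates the documented split; the actual request routing is internal to Exascale.

    ```python
    # Illustrative routing of metadata operations by scope, mirroring the
    # documented split between SYSEDS (vault metadata) and USREDS (file
    # metadata). A mental model only, not Exascale's internal dispatch.

    VAULT_SCOPE = {"vault_acl", "vault_attributes"}
    FILE_SCOPE = {"file_acl", "file_attributes", "snapshot", "clone",
                  "open", "close"}

    def route(operation):
        """Return the vault manager service responsible for an operation."""
        if operation in VAULT_SCOPE:
            return "SYSEDS"
        if operation in FILE_SCOPE:
            return "USREDS"
        raise ValueError(f"unknown operation: {operation}")

    assert route("vault_acl") == "SYSEDS"
    assert route("open") == "USREDS"   # file control operations go to USREDS
    ```
    
    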

  • Block Store Manager

    The block store manager service (BSM) serves and manages the metadata for Exascale block storage. All block store management operations are serviced by BSM. These block store management operations include creating a volume, attaching a volume to an iSCSI initiator, creating a volume snapshot, and so on. BSM maintains the availability of the block store virtual IP (VIP) addresses that are used by iSCSI targets. It also coordinates the block store worker processes.

    To provide high availability, five BSM instances are typically distributed across the Exadata storage servers. However, for configurations with fewer than five storage servers, one BSM instance usually runs on each Exadata storage server.

  • Block Store Worker

    The block store worker service (BSW) primarily services requests from block store clients and performs the resulting storage server I/O. It plays a role in clone and snapshot creation operations and is also responsible for performing volume backup and restore operations.

    To provide high availability and share the user workload, five BSW instances are typically distributed across the Exadata storage servers. However, for configurations with fewer than five storage servers, one BSW instance usually runs on each Exadata storage server.

    Optionally, BSW can also run on the Exadata compute nodes. This location enables the BSW instance to access the Exadata client network, which may be used to run volume backups to Oracle Cloud Infrastructure (OCI) object storage and to service iSCSI initiators external to Exadata.

    In a bare-metal configuration, BSW compute node instances run on the server operating system. In a configuration with compute nodes running on virtual machines (VMs), BSW compute node instances are typically run inside a dedicated guest VM.

  • Instance Failure Detection

    The instance failure detection (IFD) service is a dedicated lightweight service that quickly detects and responds to any storage server failure. IFD automatically runs on every storage server that is associated with the Exascale cluster.
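    The general idea behind a failure-detection service like IFD can be sketched with a simple heartbeat timeout: each server reports periodically, and a server whose reports stop arriving is declared failed. This generic model is for illustration only and says nothing about how IFD is actually implemented.

    ```python
    import time

    # Generic heartbeat-based failure detection, for illustration only.
    # IFD's actual detection mechanism is internal to Exascale.

    HEARTBEAT_TIMEOUT = 5.0  # seconds of silence before a server is
                             # considered failed (illustrative value)

    class FailureDetector:
        def __init__(self, timeout=HEARTBEAT_TIMEOUT):
            self.timeout = timeout
            self.last_seen = {}  # storage server -> last heartbeat time

        def heartbeat(self, server, now=None):
            """Record a heartbeat from a storage server."""
            self.last_seen[server] = time.monotonic() if now is None else now

        def failed(self, now=None):
            """Return the set of servers whose heartbeats have timed out."""
            now = time.monotonic() if now is None else now
            return {s for s, t in self.last_seen.items()
                    if now - t > self.timeout}

    fd = FailureDetector(timeout=5.0)
    fd.heartbeat("cell1", now=0.0)
    fd.heartbeat("cell2", now=0.0)
    fd.heartbeat("cell1", now=4.0)   # cell1 keeps reporting; cell2 goes silent
    assert fd.failed(now=7.0) == {"cell2"}
    ```
    
    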

  • Exadata Cell Services

    Exascale works in conjunction with, and relies on, the core Exadata cell services. Specifically, Exascale requires running instances of Cell Server (CELLSRV), Management Server (MS), and Restart Server (RS) on every storage server. Also, Exascale requires running instances of Management Server (MS) and Restart Server (RS) on every compute server that is associated with the Exascale cluster.

In addition to the Exascale storage services, the following client-side services provide specific support for various Exascale functions on the Exadata compute nodes:

  • Exascale Node Proxy

    The Exascale node proxy (ESNP) service maintains information about the current state of the Exascale cluster, which it provides to local Oracle Grid Infrastructure and Oracle Database processes.

    ESNP is a background process that runs on each Exadata compute node. In a bare-metal configuration, ESNP runs on the server operating system. In a configuration with compute nodes running on virtual machines (VMs), ESNP runs inside the guest VMs.

  • Exascale Direct Volume

    Exascale Direct Volume (EDV) is the default and recommended volume attachment mechanism within the Exadata RDMA Network Fabric.

    The EDV service exposes Exascale volumes as EDV devices on Exadata compute nodes and services the I/O on each EDV device. EDV-managed storage can be used as raw block devices or to support various file systems, including Oracle Advanced Cluster File System (ACFS).

    The EDV service is required on all Exadata compute nodes where you want to use EDV devices. In a bare-metal configuration, the EDV service runs on the server operating system. In a configuration with compute nodes running on virtual machines (VMs), the EDV service runs in the KVM host to provide the hypervisor access to VM image files based on EDV-managed storage. The EDV service also runs in each guest VM to enable direct access to EDV-managed storage from inside the VM.