Creating Distributed ExaDB-XS Using Custom Create

  1. Provide the following basic information.

    Setting Description and Notes
    Compartment

    Select a compartment to host the Distributed ExaDB-XS resource.

    Display name

    Enter a user-friendly description or other information that helps you easily identify the Distributed ExaDB-XS.

    Avoid entering confidential information.

    You can modify this name after resource creation.

    Database name prefix

    This prefix is prepended to all of the database names in the configuration for easy identification.

    Database version

    Oracle AI Database release 26ai is supported at this time.

  2. Select your License type.
  3. Optionally, add tags to the Distributed ExaDB-XS resource. These can also be added after creation.

  4. Enter the following information:

    Setting Description
    Data distribution

    Automated - Data is automatically distributed across shards using partitioning by consistent hash. The partitioning algorithm evenly and randomly distributes data across shards.

    User managed - Not currently supported.

    Replication type

    Raft replication creates replication units consisting of sets of chunks and distributes them automatically among the shards, handling chunk assignment, chunk movement, workload distribution, and rebalancing when scaling.

    Replication factor

    In Raft replication, the replication factor is the number of replicas in a replication unit. This number includes the primary (leader) member of the unit and its replicas (followers).

    Shard count

    The number of shards is determined by the replication factor.

    ECPU count

    Enter the number of ECPU cores to enable for each VM cluster. Specify the number of ECPUs as an integer. Available cores are subject to your tenancy’s service limits.

    You must enter a minimum of 8 ECPUs.

    ECPUs are elastically allocated cores drawn from the shared pool of Exadata database servers and storage servers. Aggregate ECPU consumption on a given cluster is 1.5 times the ECPU count.

    Note that some ECPUs are consumed as overhead and are not available to the shards.

    See Oracle Cloud Infrastructure Documentation for more information.

    Vault storage capacity

    The amount of storage, in GB, to allocate to each shard (database).

    You can use the same vault for all VM clusters (catalog and shards), as long as you configure a minimum of 500 GB of storage capacity for every database in the topology.

    For example, if you have 3 shards and 1 catalog, the total minimum storage needed is 500 GB x 4 = 2000 GB. In this case, you create a single vault with a minimum of 2000 GB of storage capacity.

    In Shards configuration, you can configure shards using either the map view or the list view.

    • On the map view, select the region where you want the database shards to be deployed, then select Configure Shards to enter the settings.

    • In the list view, the settings are presented in the Create Globally Distributed Exadata Database on Exascale Infrastructure page.

    Shards List

    The shards in the list view and map view are pre-populated with shard names in the home tenancy. Edit each shard to add the remaining required information.

    To configure a shard's region placement, availability domain, and subnet:

    • In the list view, select the Edit action in the action menu (…).

    • In the map view, you can select one or more regions, then select View/Edit or Configure Shards.

    If all of your shards will be in the same region, availability domain, and subnet, enable Apply same settings to all shards.

    You can select Add Shard to have up to 10 shards in the list. You can also remove shards, but note that the shard count must be greater than the Raft replication factor.
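    The sizing rules in this step (a minimum of 500 GB of vault storage per database, the 8-ECPU minimum per VM cluster, the 1.5x aggregate ECPU consumption, and the requirement that the shard count exceed the replication factor) can be sketched as a quick pre-flight check. The function and names below are illustrative only, not part of any Oracle API:

```python
# Sketch: sanity-check a planned Distributed ExaDB-XS topology against the
# sizing rules stated in this step. Illustrative code, not an Oracle API.

MIN_STORAGE_GB_PER_DB = 500   # minimum vault storage per database (shard or catalog)
MIN_ECPU_PER_CLUSTER = 8      # minimum ECPU count per VM cluster
ECPU_CONSUMPTION_FACTOR = 1.5 # aggregate ECPU consumption is 1.5x the ECPU count

def plan_topology(shard_count: int, replication_factor: int, ecpu_count: int) -> dict:
    """Return derived sizing figures, raising on rule violations."""
    if shard_count <= replication_factor:
        raise ValueError("shard count must be greater than the replication factor")
    if ecpu_count < MIN_ECPU_PER_CLUSTER:
        raise ValueError(f"ECPU count must be at least {MIN_ECPU_PER_CLUSTER}")
    databases = shard_count + 1  # shards plus one catalog
    return {
        "min_vault_storage_gb": databases * MIN_STORAGE_GB_PER_DB,
        "aggregate_ecpu_per_cluster": ecpu_count * ECPU_CONSUMPTION_FACTOR,
    }

# The worked example from the Vault storage capacity setting:
# 3 shards + 1 catalog requires at least 500 GB x 4 = 2000 GB in a single vault.
print(plan_topology(shard_count=3, replication_factor=2, ecpu_count=8))
```

    A replication factor of 2 is used here only so the 3-shard example satisfies the shard-count rule; choose the factor that matches your availability requirements.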

  5. Configure the shard catalog in Catalog configuration.

    You can apply the same configuration used for the shards by selecting Same as Shard's configuration, or make modifications that apply only to the catalog database.

    Select Edit in the actions menu (…) to configure the catalog availability domain and subnet. Note that the catalog region will be US East (Ashburn).

    Note that Raft replication does not apply to the catalog. Data protection for the catalog is configured after the Globally Distributed Database is created. See Adding Catalog Data Guard Replication.

  6. Configure the remaining settings.

    Setting Description and Notes
    Create administrator credentials

    Set the SYS user password to access all of the shard databases and catalog databases in the configuration.

    SSH Keys

    Generate, upload, or paste the SSH keys, and save them for connecting to the databases later.

    Encryption key

    Select the vault and master encryption key that were configured in Task 5. Configure Security Resources.

    Select private endpoint

    Select the private endpoint that was created for this Distributed ExaDB-XS in Common Network Resources.
    Advanced options: Shard configuration - Chunks

    Under Advanced Options you can optionally configure the number of chunks per shard.

    Advanced options: Shard configuration - Replication unit

    Displays the number of Raft replication units that will be created. A distributed database with Raft replication contains multiple replication units. A replication unit (RU) is a set of chunks that have the same replication topology. Each RU consists of a leader and its replicas (followers), which are placed on different shards.

    Advanced options: Extended VM cluster capacity

    Configure extended capacity in ECPUs per VM.

    Advanced options: VM file system storage

    Specify file system storage capacity per VM.

    Advanced options: Smart flash cache

    Specify the percentage of storage capacity to add as smart flash cache.

    Advanced options: Select character sets

    Select the Character set and National character set that will be used in all of the shard and shard catalog databases.

    AL32UTF8 is the recommended default character set, and AL16UTF16 is the recommended default national character set.

    Select ports

    Enter the GSM listener port, ONS port (local), ONS port (remote), and SCAN listener port.

    Note:

    The ONS port (remote) number must be unique to each Globally Distributed Database. Do not reuse a port number from another Globally Distributed Database unless the delete operation on the original has fully completed.
    Advanced options: Diagnostics collection

    Enable diagnostic events, health monitoring, and logs.

    Advanced options: Database backups

    Enable and schedule automated database backups.

    See Exadata Database Service on Exascale Infrastructure documentation for information about the settings.

  7. Select Validate to let Distributed ExaDB-XS run validation tests against the configuration.

  8. After any validation errors are addressed and validation succeeds, select Create to create the resources, VM clusters, and so on.

    Now the Distributed ExaDB-XS display name appears in the list while the creation operation runs.

    Creation can take a while, because several tasks are performed as part of the create operation, including host procurement, VM deployment, installing software, and generating certificates for the shard directors (GSMs).

    You can monitor the operation status in the State column and select the Distributed ExaDB-XS display name to track progress in the Work requests tab.

    When the status of all of the shards on the Shards tab is Available, Distributed ExaDB-XS creation is complete and successful.

    Caution:

    After creating a Distributed ExaDB-XS, do not move any of its vaults or keys; otherwise, the Distributed ExaDB-XS will stop working.