Creating a Globally Distributed Autonomous Database Resource

A Globally Distributed Autonomous Database resource contains the connectivity and configuration details of the shards and shard catalog databases.

You create the resource in the Globally Distributed Autonomous Database home page.

  1. Log in to the Console as a user with permissions to create Globally Distributed Autonomous Database resources, and navigate to the Globally Distributed Autonomous Database home page.

  2. Click Create Globally Distributed Autonomous Database.

    This opens a three-step wizard.

  3. In step 1, Configure Globally Distributed Autonomous Database:

    Provide the following information.

    Compartment

    Select a compartment to host the Globally Distributed Autonomous Database resource.

    Display name

    Enter a user-friendly description or other information that helps you easily identify the Autonomous Database.

    Avoid entering confidential information.

    You can modify this name after resource creation.

    Database name prefix

    This prefix is added to all of the database names in the configuration for ease of use.

    Deployment type

    This setting is not configurable. Only Dedicated Infrastructure is supported.

    Database version

    You can select release 19c or 23ai.

    Workload type

    This setting is not configurable. Only Transaction Processing is supported.

  4. In step 2, Configure Shards and Catalog, in Configure Shards, provide the following information according to the release you selected previously.

    19c Configuration Settings

    Automated

    Data is automatically distributed across shards using partitioning by consistent hash. The partitioning algorithm evenly and randomly distributes data across shards.

    User managed

    Lets you explicitly specify the mapping of data to individual shards. Use this method when, for performance, regulatory, or other reasons, certain data must be stored on a particular shard, and the administrator needs full control over moving data between shards.

    Note:

    When you choose User managed data distribution, your Shards configuration settings apply to the shardspace rather than the shard itself.

    Shard count

    Enter the total number of shards to initially deploy in the Globally Distributed Autonomous Database. You can configure up to 10, and then add more later if needed.

    Shards

    In the upper right corner of the Configure Shards pane, you can toggle between a default list view and a Map view.

    The Map view filters and shows the available Exadata clusters where shards can be deployed. To create shards in the map, click the available regions, then click Configure Shards. If you wish, you can toggle back to the list view and refine the configuration.

    Primary region

    Select the primary region where you would like to host your shard.

    Primary VM cluster

    Select a cluster available in the selected primary region.

    Shard/Shardspace name

    Shows the display name for each shard or shardspace in the configuration. Once you select a region, the name is populated.

    ECPU

    Enter the number of ECPU cores to enable for each shard. Specify the number of ECPUs as an integer. Available cores are subject to your tenancy’s service limits.

    You must enter a minimum of 2 ECPUs per shard.

    ECPUs are based on the number of cores, elastically allocated, from the shared pool of Exadata database servers and storage servers. Aggregated ECPU consumption on a given cluster is 1.5 times the ECPU count; for example, a shard configured with 8 ECPUs consumes 12 ECPUs of the cluster's aggregate capacity.

    Note that some ECPUs are consumed as overhead and are not available to the shards.

    See Oracle Cloud Infrastructure Documentation for more information.

    ECPU auto scaling

    Enable automatic scaling based on workload per shard/shardspace. This value is passed on to the Autonomous Database so that it can manage ECPU auto scaling.

    Storage

    The amount of storage, in GB, to allocate to your database.

    Enable Data Guard

    Instantiates Oracle Data Guard standby databases for each shard.

    Data Guard region

    Select the region where you would like to host the shard's Data Guard standby.

    Data Guard VM Cluster

    Select a cluster available in the selected Data Guard region.

    Configure Catalog

    You can choose to use the same configuration that is applied to the shards, or uncheck the box and make selections that apply only to the catalog database. The fields are the same as those described above for Shards.

    Create administrator credentials

    Create the user that will be able to access the shard catalog and all of the shards in the configuration.

    Encryption key

    The encryption key settings you configure depend on the data distribution type you chose above.

    Automated - All shards use the same encryption vault and encryption key. In this case, specifying the encryption key is mandatory.

    User managed - Each shard can have the same or different encryption key details. In this case, specifying encryption keys is optional.

    For both cases:

    • Based on the primary region that you selected for the first shard, you can select from the vaults and encryption keys available in that region and the selected compartment.
    • If Data Guard is enabled for a shard, and if the standby region is not the same as the primary region for that shard, you can select virtual private vaults that are replicated in the standby region.

    Select character sets

    Select the Character sets and National character sets that will be used in all of the shard and shard catalog databases. AL32UTF8 is the recommended default character set, and AL16UTF16 is the recommended default national character set.

    Select ports

    Enter the Listener port, ONS port (local), and ONS port (remote).

    Note:

    The ONS port (remote) number must be unique to each Globally Distributed Autonomous Database. Do not reuse a port number used in another Globally Distributed Autonomous Database until a delete operation is fully processed on the original.

    TLS

    TLS port - Enter the TLS port number.

    Note:

    The TLS port number must be unique to each Globally Distributed Autonomous Database. Do not reuse a port number used in another Globally Distributed Autonomous Database until a delete operation is fully processed on the original.

    Cluster certificate common name - Identifies a similar group of clusters. Enter a name that is 3 to 64 characters long and contains only letters, numbers, hyphens (-), underscores (_), and dots (.).

    The Cluster certificate common name must match the certificate common name that was used when the clusters were created.

    Advanced options: Chunks

    Under Advanced Options you can optionally configure the number of chunks per shard. This setting is only applicable when Automated data distribution is selected.

    Advanced options: Tags

    Under Advanced Options you can add tags to the Globally Distributed Autonomous Database resource. These can also be added after creation.
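The Automated data distribution method described above can be sketched in a few lines. The following Python sketch is illustrative only, not the service's actual algorithm: a sharding key is hashed, the hash space is divided into chunks, and chunks are assigned evenly to shards, so rows end up distributed evenly and randomly.

```python
import hashlib

def chunk_for_key(sharding_key: str, num_chunks: int) -> int:
    """Map a sharding key to a chunk with a uniform hash (illustrative only)."""
    digest = hashlib.sha256(sharding_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_chunks

def shard_for_chunk(chunk: int, num_shards: int) -> int:
    """Assign chunks to shards round-robin (a simplification for illustration)."""
    return chunk % num_shards

# With 12 chunks spread over 3 shards, 10,000 hypothetical customer IDs
# land in roughly equal numbers on each shard.
num_chunks, num_shards = 12, 3
counts = [0] * num_shards
for customer_id in range(10_000):
    chunk = chunk_for_key(str(customer_id), num_chunks)
    counts[shard_for_chunk(chunk, num_shards)] += 1
print(counts)  # roughly equal counts per shard
```

Because the mapping is driven by a hash rather than by explicit placement, no single shard becomes a hotspot; this is the trade-off against the User managed method, which gives you explicit control over placement.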

    23ai Configuration Settings

    Automated

    Data is automatically distributed across shards using partitioning by consistent hash. The partitioning algorithm evenly and randomly distributes data across shards.

    User managed

    Lets you explicitly specify the mapping of data to individual shards. Use this method when, for performance, regulatory, or other reasons, certain data must be stored on a particular shard, and the administrator needs full control over moving data between shards.

    Note:

    When you choose User managed data distribution, your Shards configuration settings apply to the shardspace rather than the shard itself.

    Note that when Raft is selected as the Replication type, the User managed option is disabled.

    Shard count

    Enter the total number of shards to initially deploy in the Globally Distributed Autonomous Database. You can configure up to 10, and then add more later if needed.

    Replication type

    Raft replication creates replication units consisting of sets of chunks and distributes them automatically among the shards to handle chunk assignment, chunk movement, workload distribution, and balancing upon scaling.

    Note that when Raft is selected, the User managed data distribution option is disabled.

    Data Guard is a shard-level replication solution which instantiates Oracle Data Guard standby databases for each shard.

    Replication factor

    If Raft replication type is selected, you can set the Replication factor.

    Replication factor is the number of replicas in a replication unit. This number includes the leader replica and its followers.

    Shard

    Shows the display name for each shard or shardspace in the configuration. Once you select a region, the name is populated.

    Region/Primary region

    Select the region where you would like to host your shard.

    If Data Guard is the selected replication type, this is the Primary region.

    Automated data distribution with the Data Guard replication type does not support shards in multiple regions.

    VM cluster/Primary VM cluster

    Select a cluster available in the selected region.

    If Data Guard is the selected replication type, this is the Primary VM cluster.

    ECPU

    Enter the number of ECPU cores to enable for each shard. Specify the number of ECPUs as an integer. Available cores are subject to your tenancy’s service limits.

    You must enter a minimum of 2 ECPUs per shard.

    ECPUs are based on the number of cores, elastically allocated, from the shared pool of Exadata database servers and storage servers. Aggregated ECPU consumption on a given cluster is 1.5 times the ECPU count; for example, a shard configured with 8 ECPUs consumes 12 ECPUs of the cluster's aggregate capacity.

    Note that some ECPUs are consumed as overhead and are not available to the shards.

    See Oracle Cloud Infrastructure Documentation for more information.

    ECPU auto scaling

    Enable automatic scaling based on workload per shard/shardspace. This value is passed on to the Autonomous Database so that it can manage ECPU auto scaling.

    Storage

    The amount of storage, in GB, to allocate to your database.

    Data Guard

    If Data Guard is the selected replication type, this toggle enables or disables Data Guard replication on the selected shard.

    If enabled, an Oracle Data Guard standby database is instantiated for the shard.

    Data Guard region

    If Data Guard is the selected replication type, select the region where you would like to host the shard's Data Guard standby.

    Data Guard VM Cluster

    If Data Guard is the selected replication type, select a cluster available in the selected Data Guard region.

    Configure Catalog

    You can choose to use the same configuration that is applied to the shards, or uncheck the Same as Shard's configuration box and make selections that apply only to the catalog database. The fields are the same as those described above for Shards.

    Note that the Raft replication type does not apply to the catalog. If you want catalog replication, you can uncheck Same as Shard's configuration and configure Data Guard.

    Create administrator credentials

    Create the user that will be able to access the shard catalog and all of the shards in the configuration.

    Encryption key

    The encryption key settings you configure depend on the data distribution type you chose above.

    Automated - All shards use the same encryption vault and encryption key. In this case, specifying the encryption key is mandatory.

    User managed - Each shard can have the same or different encryption key details. In this case, specifying encryption keys is optional.

    For both cases:

    • Based on the primary region that you selected for the first shard, you can select from the vaults and encryption keys available in that region and the selected compartment.
    • If Data Guard is enabled for a shard, and if the standby region is not the same as the primary region for that shard, you can select virtual private vaults that are replicated in the standby region.

    Select character sets

    Select the Character sets and National character sets that will be used in all of the shard and shard catalog databases. AL32UTF8 is the recommended default character set, and AL16UTF16 is the recommended default national character set.

    Select ports

    Enter the Listener port, ONS port (local), and ONS port (remote).

    Note:

    The ONS port (remote) number must be unique to each Globally Distributed Autonomous Database. Do not reuse a port number used in another Globally Distributed Autonomous Database until a delete operation is fully processed on the original.

    TLS

    TLS port - Enter the TLS port number.

    Note:

    The TLS port number must be unique to each Globally Distributed Autonomous Database. Do not reuse a port number used in another Globally Distributed Autonomous Database until a delete operation is fully processed on the original.

    Cluster certificate common name - Identifies a similar group of clusters. Enter a name that is 3 to 64 characters long and contains only letters, numbers, hyphens (-), underscores (_), and dots (.).

    The Cluster certificate common name must match the certificate common name that was used when the clusters were created.

    Advanced options: Chunks

    Under Advanced Options you can optionally configure the number of chunks per shard. This setting is only applicable when Automated data distribution is selected.

    Advanced options: Replication unit

    Available for release 23ai only.

    If the Raft replication type is selected, you can optionally configure the number of replication units created for the Globally Distributed Autonomous Database under Advanced Options.

    When Raft replication is enabled, a Globally Distributed Autonomous Database contains multiple replication units. A replication unit is a set of chunks that have the same replication topology.

    Advanced options: Tags

    Under Advanced Options you can add tags to the Globally Distributed Autonomous Database resource. These can also be added after creation.
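The relationship between the Replication factor and Replication unit settings above can be illustrated with a small sketch. This is a hypothetical placement model for illustration only, not Oracle's actual Raft implementation: each replication unit (a set of chunks) has one leader replica plus replication factor minus one followers, and the replicas are spread across shards.

```python
from dataclasses import dataclass

@dataclass
class ReplicationUnit:
    ru_id: int
    leader: int           # index of the shard hosting the leader replica
    followers: list[int]  # indexes of shards hosting follower replicas

def place_replication_units(num_rus: int, num_shards: int,
                            replication_factor: int) -> list[ReplicationUnit]:
    """Spread leaders and followers round-robin across shards (illustrative)."""
    # Each replica of a unit must live on a distinct shard.
    assert replication_factor <= num_shards, "replication factor exceeds shard count"
    units = []
    for ru in range(num_rus):
        leader = ru % num_shards
        followers = [(leader + i) % num_shards
                     for i in range(1, replication_factor)]
        units.append(ReplicationUnit(ru, leader, followers))
    return units

# Replication factor 3 = 1 leader + 2 followers per replication unit.
units = place_replication_units(num_rus=6, num_shards=3, replication_factor=3)
for u in units:
    print(u.ru_id, "leader on shard", u.leader, "followers on", u.followers)
```

Rotating leadership across shards, as in this sketch, is what lets Raft replication balance the write workload: every shard leads some replication units while following others, so no single shard handles all writes.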

  5. Click Next to review the configuration details.

  6. If everything on the summary page is correct, click Validate to run validation against the configuration.

  7. Once any validation errors are addressed and validation is successful, click Create.

    After you click Create, the Globally Distributed Autonomous Database display name appears in the list while the creation operation runs.

    The creation operation can take a while because several tasks are performed as part of it, including host procurement, software installation, and certificate generation for the shard directors (GSMs).

    You can monitor the operation status in the State column and track progress in the Work request tab. When the shard status is Available, Globally Distributed Autonomous Database creation is complete and successful.

    Caution:

    After you create a Globally Distributed Autonomous Database, do not move its vaults and keys; otherwise, the Globally Distributed Autonomous Database will not work.

  8. When the Create process is complete, you can continue to Managing Certificates to download, sign, and upload the certificates for the GSMs.