Compute Management and Billing in Autonomous Database on Dedicated Exadata Infrastructure

Oracle Autonomous Database on Dedicated Exadata Infrastructure uses specific algorithms to allocate and bill for usage of the compute used by Autonomous Databases. Understanding these algorithms can help you determine how best to create and configure your Autonomous Databases to meet performance goals in the most cost-effective fashion.

Tip:

You can track the different compute (CPU) attributes discussed in this article from the Details page of an Autonomous Exadata VM Cluster (AVMC) or Autonomous Container Database (ACD). For guidance, refer to Resource Usage Tracking.

Compute Models in Autonomous Database on Dedicated Exadata Infrastructure

Autonomous Database on Dedicated Exadata Infrastructure offers two compute models for configuring your Autonomous Database resources:
  • ECPU: An ECPU is an abstracted measure of compute resources. ECPUs are based on the number of cores elastically allocated from a pool of compute and storage servers. You need at least 2 ECPUs to provision an Autonomous Database.

    While provisioning a new database, cloning an existing database, or scaling the CPU resources of an existing database up or down, the CPU count defaults to 2 ECPUs and changes in increments of 1. For example, you cannot assign 3.5 ECPUs to a database; the next available number of ECPUs above 3 is 4.

    Note:

    You can create Autonomous Database for Developers instances on ECPU based container databases. They are free Autonomous Databases that developers can use to build and test new applications. See Autonomous Database for Developers for more details.
  • OCPU: An OCPU is a physical measure of compute resources. OCPUs are based on the physical core of a processor with hyper-threading enabled.

    While provisioning a new database, cloning an existing database, or scaling the CPU resources of an existing database up or down:

    • The CPU count defaults to 1 OCPU and changes in increments of 1. For example, you cannot assign 3.5 OCPUs to a database; the next available number of OCPUs above 3 is 4.
    • For databases that do not need an entire OCPU, you can assign OCPUs from 0.1 to 0.9 in increments of 0.1 OCPUs. This allows you to overprovision CPU and run more databases on each infrastructure instance. Refer to CPU Overprovisioning for more details.

    You can choose a compute model while provisioning an Autonomous Exadata VM Cluster resource. The compute model selected for an Autonomous Exadata VM Cluster applies to all of its Autonomous Container Databases and Autonomous Database instances. The sketch below summarizes the CPU-count rules for both compute models.
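
As a quick illustration, the following minimal Python sketch rounds a requested value to a CPU count that satisfies the rules described above for each compute model. The helper function is purely illustrative and is not part of any Oracle API.

    import math

    def valid_cpu_count(requested, model):
        # Illustrative helper (not an Oracle API): round a requested value up to
        # a CPU count that the service accepts for the given compute model.
        if model == "ECPU":
            # Minimum of 2 ECPUs, then whole-number increments: 3.5 -> 4.
            return max(2, math.ceil(requested))
        if model == "OCPU":
            if requested < 1:
                # Fractional OCPUs from 0.1 to 0.9 in steps of 0.1 (CPU overprovisioning).
                return round(math.ceil(requested * 10) / 10, 1)
            # Otherwise a minimum of 1 OCPU, in whole-number increments: 3.5 -> 4.
            return math.ceil(requested)
        raise ValueError("model must be 'ECPU' or 'OCPU'")

    print(valid_cpu_count(3.5, "ECPU"))   # 4
    print(valid_cpu_count(0.25, "OCPU"))  # 0.3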

Compute Management in Autonomous Database

Autonomous Database instances are deployed into an Autonomous Exadata VM Cluster (AVMC) and into one of its child Autonomous Container Databases (ACD). Exadata Infrastructures are capable of running multiple AVMCs. The CPUs that you allocate while provisioning an AVMC resource will be the total CPUs available for its Autonomous Databases. When you create multiple AVMCs, each AVMC can have its own value for total CPUs.

Support for multiple Autonomous Exadata VM Clusters is not available on Oracle Public Cloud deployments of Exadata Infrastructure (EI) resources created before the launch of the Multiple VM Autonomous Database feature. For X8M-generation and later Exadata Infrastructure resources created after that launch, each AVMC is created with one cluster node for each server of the Exadata system shape you choose. For information about constraining these total CPUs across different groups of users, see How Compartment Quotas Affect CPU Management.

Note:

Creating AVMC and ACD resources does not initiate billing. Even though you assign a total CPU count to an AVMC, and each ACD consumes 2 OCPUs or 8 ECPUs per node when created, these CPUs are not billed. CPUs are billed only after you provision an Autonomous Database in an AVMC and one of its ACDs and that database is actively running. As a result, you can create ACDs within AVMCs to organize and group your databases by line of business, functional area, or any other scheme without worrying about incurring costs.

Note:

The maximum number of AVMC and ACD resources you can create on a given Exadata Infrastructure varies based on the generation of hardware. Please refer to Resource Limits and Characteristics of Infrastructure Shapes for details on constraints for each generation.

At the AVMC or ACD level, the total number of CPUs available for creating databases is called available CPUs. At the AVMC resource level, the available CPUs are equal to the total CPUs until you create the first ACD. Once you create an ACD, 2 OCPUs or 8 ECPUs per node are allocated to the new ACD from the AVMC's available CPUs, and the available CPUs at the AVMC resource level are reduced accordingly. When you create the first Autonomous Database in that ACD, the new database consumes the initially allocated CPUs (2 OCPUs or 8 ECPUs per node). If the new database needs more than 2 OCPUs or 8 ECPUs, the additional CPUs are assigned from the parent AVMC's available CPUs, thereby reducing the available CPUs at the parent AVMC level. As you create more ACDs and provision Autonomous Databases within each ACD, the available CPU values change accordingly.
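
The following minimal sketch walks through this bookkeeping with hypothetical numbers (a 2-node AVMC using the ECPU model, and one plausible reading of how the first database draws extra CPUs). It is illustrative only; the actual accounting is performed by the service.

    # Hypothetical 2-node AVMC using the ECPU model.
    avmc_nodes = 2
    avmc_available = 40                    # total ECPUs chosen when provisioning the AVMC

    # Creating an ACD allocates 8 ECPUs per node from the AVMC's available CPUs.
    acd_initial = 8 * avmc_nodes           # 16 ECPUs
    avmc_available -= acd_initial          # 40 - 16 = 24

    # The first Autonomous Database consumes the initially allocated CPUs; anything
    # beyond that is drawn from the parent AVMC's available CPUs.
    first_db_ecpus = 20
    extra = max(0, first_db_ecpus - acd_initial)   # 4 additional ECPUs
    avmc_available -= extra                # 24 - 4 = 20
    print(avmc_available)                  # 20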

The available CPUs at the Autonomous Exadata VM Cluster level apply to all of its Autonomous Container Databases. This count of CPUs available to the container database becomes important if you are using the auto-scaling feature, as described in CPU Allocation When Auto-Scaling.

Note:

When you create an Autonomous Database, by default Oracle reserves additional CPUs to ensure that the database can run with at least 50% capacity even in the case of a node failure. You can change the percentage of CPUs reserved across nodes to 0% or 25% while provisioning an ACD. See Create an Autonomous Container Database for instructions. These additional CPUs are not included in the billing.

Similarly, when you manually scale up the CPUs of an Autonomous Database, the additional CPUs are consumed from the available CPUs at its parent AVMC level, and that value changes accordingly.

When an Autonomous Database is running, you are billed for the number of CPUs currently allocated to the database, whether specified at initial creation or later by a manual scaling operation. Additionally, if auto-scaling is enabled for the database, you are billed per second for any additional CPUs the database uses as a result of being automatically scaled up. See CPU Billing Details for more information about how billing is measured and computed.

When an Autonomous Database is stopped, you are not billed for it. However, the number of CPUs allocated to it is not returned to the available CPUs at its parent AVMC level for the overall deployment.

When an Autonomous Database is terminated or scaled down, the CPUs allocated to it are not immediately returned to the available CPUs at its parent AVMC level for the overall deployment. They continue to be included in the count of CPUs available to its parent container database until that container database is restarted. These CPUs are called reclaimable CPUs. The reclaimable CPUs at the parent AVMC level are the sum of the reclaimable CPUs of all of its ACDs. When an ACD is restarted, it returns all of its reclaimable CPUs to the available CPUs at its parent AVMC level.
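
A minimal sketch of this reclaimable-CPU behavior, again with hypothetical numbers:

    avmc_available = 20      # available CPUs at the parent AVMC level
    acd_reclaimable = 0      # reclaimable CPUs tracked at the ACD level

    # Terminating (or scaling down) a 4-CPU database does not return its CPUs to the
    # AVMC immediately; they become reclaimable CPUs of its parent ACD.
    acd_reclaimable += 4

    # Restarting the ACD returns all of its reclaimable CPUs to the parent AVMC.
    avmc_available += acd_reclaimable      # 24
    acd_reclaimable = 0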

Note:

Restarting an Autonomous Container Database (ACD) is an online operation, done in a rolling manner across the cluster, and will not result in application downtime if configured according to best practices to use Transparent Application Continuity.

CPU Allocation When Auto-Scaling

The auto-scaling feature enables an Autonomous Database to use up to three times the CPU and I/O resources of its allocated CPU count. With CPU overprovisioning, if three times the CPU count results in a value less than 1, it is rounded up to the next whole number. See CPU Overprovisioning for more details.

To ensure that no single Autonomous Database can auto-scale up to consume all CPUs available in the pool for the overall deployment, Oracle Autonomous Database on Dedicated Exadata Infrastructure uses the Autonomous Container Database as a limiting control.

While provisioning an auto-scaling enabled Autonomous Database in an ACD, if the available CPUs in that ACD are less than three times the CPU value of the new database, additional CPUs are reserved in that ACD. These CPUs are called reserved CPUs. Reserved CPUs ensure that the available CPUs at the ACD level are always greater than or equal to three times the CPU value of the largest auto-scaling enabled database in that ACD. These reserved CPUs can still be used to create or manually scale Autonomous Databases in the ACD.
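
The reservation rule can be pictured with the following minimal sketch; the function and values are illustrative only and not part of any Oracle API.

    def acd_reserved_cpus(acd_available_cpus, largest_autoscaling_db_cpus):
        # CPUs reserved so that the ACD's available CPUs stay at or above three times
        # the CPU value of its largest auto-scaling enabled database.
        return max(0, 3 * largest_autoscaling_db_cpus - acd_available_cpus)

    # Example: provisioning a 4-ECPU auto-scaling database in an ACD that currently has
    # 10 ECPUs available requires 12 ECPUs of headroom, so 2 additional CPUs are reserved.
    print(acd_reserved_cpus(10, 4))  # 2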

When automatically scaling up an Autonomous Database, Oracle Autonomous Database on Dedicated Exadata Infrastructure looks for idle CPUs in its parent container database. If idle CPUs are available, the Autonomous Database is scaled up; otherwise, it is not. Databases inherently have a lot of idle time, so auto-scaling is a way to maximize resource usage while controlling costs and preserving good isolation from databases in other Autonomous Container Databases.

If the CPU used to auto-scale an Autonomous Database came from another running Autonomous Database that is lightly loaded and therefore not using all of its allocated CPUs, Oracle Autonomous Database on Dedicated Exadata Infrastructure automatically scales the auto-scaled database back down when the load on the other database increases and it needs its allocated CPUs back.

Consider the example of an Autonomous Container Database hosting four running 4-CPU Autonomous Databases, all with auto-scaling enabled. The count of CPUs available to the container database for auto-scaling purposes is 16. Should one of these databases need to be auto-scaled past 4 CPUs due to a load increase, Oracle Autonomous Database on Dedicated Exadata Infrastructure performs the auto-scaling operation only if one or more of the other databases are lightly loaded and not using all of their allocated CPUs. The billing cost of this example is 16 CPUs at a minimum because all four 4-CPU databases are always running.

By contrast, consider the example of an Autonomous Container Database hosting four running 2-CPU Autonomous Databases, all with auto-scaling enabled, and one stopped 8-CPU Autonomous Database. The count of CPUs available to the container database for auto-scaling purposes is again 16. Should one of the running databases need to be auto-scaled past 2 CPUs due to a load increase, Oracle Autonomous Database on Dedicated Exadata Infrastructure can perform the operation using CPUs allocated to the stopped 8-CPU database. In this example, the four running databases can consume up to a total of 8 additional CPUs simultaneously without consuming each other's allocated CPUs. The billing cost of this example is only 8 CPUs at a minimum because only the four 2-CPU databases are always running.
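
One way to picture the arithmetic of the second example is the following minimal sketch; the per-database values are hypothetical, and the actual decision is made continuously by the service.

    # Second example: four running 2-CPU databases plus one stopped 8-CPU database.
    acd_cpus_for_autoscaling = 16
    cpus_in_use = {"db1": 2, "db2": 2, "db3": 2, "db4": 2, "stopped_db": 0}

    # Idle CPUs in the container database are what auto-scaling can draw on.
    idle = acd_cpus_for_autoscaling - sum(cpus_in_use.values())
    print(idle)   # 8 additional CPUs the running databases can consume simultaneously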

For any Autonomous Data Guard service instance, local or cross-region, the additional cost is the number of ECPUs or OCPUs you reserved when you created or explicitly scaled your primary service instance, regardless of whether auto-scaling is enabled. Auto-scaling-related ECPU or OCPU consumption on primary service instances does not occur on Autonomous Data Guard standby service instances.

CPU Billing Details

Oracle Autonomous Database on Dedicated Exadata Infrastructure computes CPU billing as follows:

  1. CPU usage for each Autonomous Database is measured each second in units of whole OCPUs or 4 ECPUs. A stopped Autonomous Database uses zero OCPUs or ECPUs. A running Autonomous Database uses its allocated number of OCPUs or ECPUs plus any additional OCPUs or ECPUs consumed due to auto-scaling.
  2. The per-second measurements are averaged across each hour interval for each Autonomous Database.
  3. The per-hour averages for the Autonomous Databases are added together to determine the CPU usage per hour across the entire Exadata Infrastructure resource.
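
The following minimal sketch illustrates these three steps with hypothetical per-second samples; the real metering is performed by the service.

    # Step 1: per-second CPU samples for each database (allocated CPUs plus any
    # auto-scaled CPUs; zero while a database is stopped).
    samples_db1 = [4] * 3600                 # a 4-CPU database running the whole hour
    samples_db2 = [2] * 1800 + [4] * 1800    # a 2-CPU database auto-scaled to 4 CPUs for half the hour

    def hourly_cpu_usage(per_second_samples_by_db):
        total = 0.0
        for samples in per_second_samples_by_db.values():
            # Step 2: average the per-second measurements over the hour.
            total += sum(samples) / len(samples)
        # Step 3: add the per-database hourly averages together.
        return total

    print(hourly_cpu_usage({"db1": samples_db1, "db2": samples_db2}))  # 7.0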

How Compartment Quotas Affect CPU Management

Normally, when you create or scale up an Autonomous Database, the ability of Oracle Autonomous Database on Dedicated Exadata Infrastructure to satisfy your request depends only on the availability of unallocated CPUs in the single pool of CPUs across the entire deployment.

However, you can use the compartment quotas feature of Oracle Cloud Infrastructure to further restrict, on a compartment-by-compartment basis, the number of CPUs available to create, manually scale, and auto-scale Autonomous Databases of each workload type (Data Warehouse or Transaction Processing) individually.

In brief, you use the compartment quotas feature by creating set, unset and zero policy statements to limit the availability of a given resource in a given compartment. For detailed information and instructions, see Compartment Quotas.

How VM Cluster Nodes Affect CPU Management

The preceding discussion of CPU management and allocation states that you can create multiple Autonomous Exadata VM Cluster (AVMC) resources by choosing the CPU count per node while provisioning the AVMC resource.

This section describes in greater detail how Oracle Cloud Infrastructure places Autonomous Databases on the VM cluster nodes, and the consequences of that placement for auto-scaling and parallel processing.

  • When you create an Autonomous Database, its CPU allocation is split across multiple VM cluster nodes if the CPU count you specify is greater than the smaller of the database split threshold defined at the ACD level and the number of CPUs per node of the AVMC.

    For example, suppose you created an AVMC resource with two nodes and 5 OCPUs per node and you created an ACD in this AVMC with the database split threshold set to 16. As 5 is smaller than 16, any Autonomous Database with a CPU requirement greater than 5 will be split and opened across multiple nodes, allowing DML requests across those nodes. However, if the AVMC was created with two nodes and 20 OCPUs per node, any database with an OCPU requirement greater than 16 will be split and opened across multiple nodes.

  • When you manually scale an Autonomous Database, its CPU allocation is rebalanced to match this same allocation model. A single VM cluster node is used when the CPU count is less than or equal to the smaller of the database split threshold set at the ACD level and the CPU count per node of the AVMC resource; the allocation is split across multiple VM cluster nodes when the CPU count is greater than that value, as illustrated in the sketch below.
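
A minimal sketch of this placement rule follows; the function is illustrative only and not part of any Oracle API.

    def is_split_across_nodes(db_cpu_count, split_threshold, cpus_per_node):
        # A database is opened on multiple VM cluster nodes when its CPU count exceeds
        # the smaller of the ACD's database split threshold and the AVMC's CPUs per node.
        return db_cpu_count > min(split_threshold, cpus_per_node)

    # First example above: 2 nodes with 5 OCPUs per node, split threshold 16.
    print(is_split_across_nodes(6, 16, 5))    # True: 6 > min(16, 5)
    # Second example above: 20 OCPUs per node, split threshold 16.
    print(is_split_across_nodes(16, 16, 20))  # False: opens on a single node
    print(is_split_across_nodes(17, 16, 20))  # True: greater than 16, so split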

How an Autonomous Database's CPU allocation is distributed across VM cluster nodes affects the following operations:
  • Auto-scaling:
    • Auto-scaling may occur within a single VM cluster node for non-parallelizable DML and across VM Cluster nodes if the DML is parallelizable.
    • Multiple concurrent sessions with non-parallelizable queries may be routed to all nodes in the cluster, effectively allowing auto-scaling across all nodes in a multi-node database.
  • Parallel Processing:
    • Parallel processing of SQL statements occurs within the Autonomous Exadata VM Cluster nodes that are open for the database, first within a single node and then in adjacent open nodes, which, as discussed above, depends on the size of the Autonomous Exadata VM Cluster.

Depending on the resource utilization on each node, not all values of the available CPUs can be used to provision or scale Autonomous Databases. For example, even if you have 20 CPUs available at the AVMC level, some values from 1 to 20 may not be usable for provisioning or scaling an Autonomous Database because of resource availability at the node level. The list of CPU values that can be used to provision or scale an Autonomous Database is called provisionable CPUs.

When you try to provision or scale an Autonomous Database from the OCI console, the CPU field offers a dropdown with the list of provisionable CPUs. Alternatively, you can use the following APIs to get the list of provisionable CPU values (a sketch using the OCI Python SDK follows this list):
  • GetAutonomousContainerDatabase returns a list of provisionable CPU values that can be used to create a new Autonomous Database in the given Autonomous Container Database. See GetAutonomousContainerDatabase for more details.

  • GetAutonomousDatabase returns a list of provisionable CPU values that can be used for scaling a given Autonomous Database. See GetAutonomousDatabase for more details.
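
For example, a minimal sketch using the OCI Python SDK might look like the following. It assumes a valid ~/.oci/config profile, uses hypothetical placeholder OCIDs, and assumes that your SDK version exposes the provisionable_cpus field on the returned models; adapt it to your environment.

    import oci

    config = oci.config.from_file()                  # default OCI SDK/CLI configuration
    db_client = oci.database.DatabaseClient(config)

    # Provisionable CPU values for creating a new Autonomous Database in a given ACD.
    acd = db_client.get_autonomous_container_database(
        autonomous_container_database_id="ocid1.autonomouscontainerdatabase.oc1..example"
    ).data
    print(acd.provisionable_cpus)

    # Provisionable CPU values for scaling a given Autonomous Database.
    adb = db_client.get_autonomous_database(
        autonomous_database_id="ocid1.autonomousdatabase.oc1..example"
    ).data
    print(adb.provisionable_cpus)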